
Installation Guide

Prerequisites

Kiali Version Requirements

Kiali requires a supported version of Istio. The Istio news page posts end-of-life (EOL) dates. Only the Kiali version bundled with a given Istio release, or a newer Kiali version, is supported.

Version Compatibility

Istio            Kiali            Notes
1.8.0 or later   1.26.0 or later  Istio 1.8 removes all support for mixer/telemetry V1, as does Kiali 1.26.0. Use earlier versions of Kiali for mixer support.
1.7.0 or later   1.22.1 or later  Istio 1.7 istioctl will no longer install Kiali. Use the Istio samples/addons all-in-one yaml or the Kiali Helm Chart for quick demo installs.
1.6.0 or later   1.18.1 or later  Istio 1.6 introduces CRD and Config changes; Kiali 1.17 is recommended for Istio < 1.6.
<= 1.5           n/a              Out of support; users should use Istio >= 1.6.

 If you are running Red Hat OpenShift Service Mesh (RH OSSM), use only the bundled version of Kiali.

Browser Version Requirements

Kiali requires a modern web browser and supports the last two versions of Chrome, Firefox, Safari or Edge.

Hardware Requirements

Any machine capable of running Red Hat OpenShift/OKD should also be able to run Kiali. For production environments this usually means:

  • For masters: 16GB RAM, 4vCPUs, 40GB of hard disk space.

  • For nodes: 8GB RAM, 1vCPU, 16GB of hard disk space.

For development the requirements are lower, depending on how demanding your applications are and how many services you plan to run at the same time on your machine. Of course, your situation may vary, so plan accordingly. Keep in mind that Kiali is not especially demanding of machine resources and, as an isolated part of your environment, should not affect your applications' throughput or latency.

For more, see the OKD install documentation.

Helm Requirements

If you are going to install the Helm Chart, you must use Helm 3. You can install Helm by following the Helm installation docs.

OpenShift Requirements

If you are installing on OpenShift, you must grant the cluster-admin role to the user that is installing Istio and Kiali. If OpenShift is installed locally on the machine you are using, the following command should log you in as user system:admin which has this cluster-admin role:

  oc login -u system:admin

 For most commands listed on this page the Kubernetes CLI command kubectl is used to interact with the cluster environment. On OpenShift you can simply replace kubectl with oc, unless otherwise noted.

Google Cloud Private Cluster Requirements

Private clusters on Google Cloud have network restrictions. Kiali needs your cluster to allow access from the Kubernetes API to the Istio control plane namespace on ports 8080 and 15000. You must update your firewall to allow those two ports.

To review the master access firewall rule:

gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"

To replace the existing rule and allow master access:

gcloud compute firewall-rules update <firewall-rule-name> --allow <previous-ports>,tcp:8080,tcp:15000

  Istio deployments on private clusters also need extra ports to be opened. Check the Istio installation page for GKE to see all the extra installation steps for this platform.

Installation

  If you previously installed Kiali through some other mechanism, you must uninstall it first using that original mechanism’s uninstall procedures. There is no migration path between older installation mechanisms and the install mechanisms explained below.

Quick Install

If you want to install for demos or just to "kick the tires" on Kiali, it may be easier to install using the Quick Start instructions instead.

Install Kiali via Istio or Maistra

  Starting with Istio 1.7, Kiali is no longer shipped with the istioctl installer. If you are using an Istio implementation of version 1.7 or higher, skip this section and go on to the next.

If you prefer to use the latest Kiali version, complete the Istio or Maistra installation and then Install Kiali Latest.

To install Kiali as part of the Istio installation just follow the Istio Setup docs. If you are running on OpenShift and prefer Maistra, see Maistra Setup docs. You may then continue to Open the UI.

Upgrade the Istio or Maistra Version of Kiali

The version of Kiali packaged with Istio or Maistra may not contain the most recent features and fixes. To upgrade to the latest version of Kiali, first Uninstall Kiali Operator and Kiali, and then proceed to Install Kiali Latest.

Install Kiali Latest

The latest version of Kiali is installed using the Kiali Operator, a Kubernetes Operator that manages your Kiali installation. The operator watches the Kiali Custom Resource (CR), a YAML file that holds the Kiali configuration. When you modify the Kiali CR, the operator installs, updates, or uninstalls Kiali as needed.

The next few sections describe the different methods available for installing the Kiali Operator. Depending on the method you use, the operator may then install Kiali for you. Before you install Kiali, decide which authentication strategy Kiali should use; this determines how users log into Kiali. See the authentication configuration page.
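For example, a fragment of the Kiali CR selecting the token strategy looks like the sketch below (the strategies available to you depend on your platform and Kiali version, so consult the authentication configuration page):

auth:
  strategy: token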

  It is only necessary to install the Kiali Operator once. After the operator is installed and running, you need only create or edit the Kiali CR (see Create or Edit the Kiali CR); there is no need to perform any of the installations below again. The Kiali ConfigMap is managed by the Kiali Operator and should not be manually edited.

OperatorHub

For production environments that have OperatorHub installed (OpenShift comes with OperatorHub out-of-box), you may want to install Kiali Operator using OperatorHub. Simply go to the OperatorHub console and install Kiali Operator. At that point, you can create the Kiali CR to install Kiali.

Helm Chart

For production environments that do not have OperatorHub, it is recommended that you use the Kiali Operator Helm Chart located in the Kiali Helm Chart Repository at https://kiali.org/helm-charts. There are two separate Helm Charts: kiali-operator and kiali-server. The kiali-operator chart installs the operator, which in turn installs Kiali when you create a Kiali CR; the kiali-server chart installs a standalone Kiali Server without the operator.

  The Kiali Helm Charts require Helm v3

The Kiali Operator Helm Chart is configurable. You can see the default values.yaml here.

To install the latest Kiali Operator along with a Kiali CR (which triggers a Kiali Server to be installed in istio-system namespace) using the Helm Chart, you can run this:

$ kubectl create namespace kiali-operator
$ helm install \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    kiali-operator \
    kiali-operator

  To install a specific version X.Y.Z, simply pass --version X.Y.Z to the helm command

  All Kiali CR settings have analogous helm values. Anything under the Kiali CR spec can usually be specified via a helm --set option when using the kiali-server helm chart. For example, the Kiali CR setting spec.server.web_fqdn has an analogous value in the kiali-server helm chart that you can set via --set server.web_fqdn=<your value>.

  If you instead install the Kiali Operator using the kiali-operator helm chart, and you want that chart to also create a Kiali CR for you, that is done under the cr value. For example, the Kiali CR setting spec.server.web_fqdn can be set via --set cr.spec.server.web_fqdn=<your value> (this assumes you also pass --set cr.create=true, which tells the kiali-operator helm chart to create the Kiali CR for you).
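As an illustrative sketch of the two approaches, using a placeholder host name kiali.example.com:

  helm install \
    --namespace istio-system \
    --repo https://kiali.org/helm-charts \
    --set server.web_fqdn=kiali.example.com \
    kiali-server \
    kiali-server

  helm install \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    --set cr.create=true \
    --set cr.namespace=istio-system \
    --set cr.spec.server.web_fqdn=kiali.example.com \
    kiali-operator \
    kiali-operator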

This installation method gives Kiali access to existing namespaces as well as namespaces created later. See Namespace Management for more information if you want to change that behavior.

Operator-Only Install

To install only the Kiali Operator and not a Kiali CR, simply pass --set cr.create=false to the helm command. This option is good when you plan to customize the Kiali CR and the resulting Kiali Server installation.
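For example, mirroring the install command shown above but without creating a Kiali CR:

  kubectl create namespace kiali-operator
  helm install \
    --set cr.create=false \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    kiali-operator \
    kiali-operator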

After the Kiali Operator is installed and running, go to Create or Edit the Kiali CR for the customized Kiali installation.

Kiali-Only Install

To install the Kiali Server without the operator, use the kiali-server Helm Chart:

  helm install \
    --namespace istio-system \
    --repo https://kiali.org/helm-charts \
    kiali-server \
    kiali-server

Create or Edit the Kiali CR

The Kiali Operator watches the Kiali CR. Creating, updating, or removing a Kiali CR will trigger the Kiali Operator to install, update, or remove Kiali. This assumes the Kiali Operator has already been installed.

To create an initial Kiali CR file, it is recommended to copy the fully documented example Kiali CR YAML file. Edit that file, being careful to maintain proper formatting, and save it to a local file such as my-kiali-cr.yaml.

  It is important to understand the spec.deployment.accessible_namespaces setting in the CR. See Accessible Namespaces for more information.

  The Kiali ConfigMap will be managed by the Kiali Operator and should not be manually edited.
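For illustration only, a very small my-kiali-cr.yaml might look like the sketch below; the fully documented example file remains the recommended starting point, and the apiVersion shown is the one used by recent operator releases:

apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
spec:
  deployment:
    # namespaces Kiali can observe; see Accessible Namespaces below
    accessible_namespaces:
    - istio-system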

To install Kiali, create the Kiali CR using the local file by running a command similar to this (note: the typical Kiali CR is normally installed in the Istio control plane namespace):

  kubectl apply -f my-kiali-cr.yaml -n istio-system

To update Kiali, edit and save the existing Kiali CR; for example:

  kubectl edit kiali kiali -n istio-system

Open the UI

Once Istio, Maistra, or the Kiali Operator has installed Kiali and the Kiali pod has successfully started, you can access the UI. Please check the FAQ: How do I access Kiali UI?
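For a quick check during testing, a port-forward works; this is a sketch assuming the default kiali service listening on port 20001 in the istio-system namespace:

  kubectl port-forward svc/kiali 20001:20001 -n istio-system

Then browse to http://localhost:20001/kiali (the default web root on Kubernetes; see Customize the Kiali UI web context root below).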

  The credentials you use on the login screen depend on the authentication strategy that was configured for Kiali. See the authentication configuration page for more details.

Uninstall

Uninstall Kiali Only

Removing Kiali is simple: just delete the Kiali CR. To trigger the Kiali Operator to uninstall Kiali, run a command similar to this (note: the typical Kiali CR is named kiali and is normally installed in the Istio control plane namespace):

  kubectl delete kiali kiali -n istio-system

Once deleted, you have no Kiali installed, but you still have the Kiali Operator running. You could create another Kiali CR with potentially different configuration settings to install a new Kiali instance.

Uninstall Kiali Operator and Kiali

If you installed Kiali Operator using OperatorHub, use OperatorHub to uninstall. Otherwise, to uninstall everything related to Kiali (Kiali Operator, Kiali, etc.) you will want to use Helm.

To uninstall, first you must ensure all Kiali CRs that are being watched by the operator are deleted. This gives the operator a chance to uninstall the Kiali Servers before you remove the operator itself.
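For example, to delete every Kiali CR in the cluster in one step:

  kubectl delete kialis --all --all-namespaces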

  If you fail to delete the Kiali CRs first, your cluster may not be able to delete the namespace where the CR is deployed and remnants from the Kiali Server will not be deleted.

After you have successfully deleted the Kiali CRs, you can uninstall the Kiali Operator using Helm. Because Helm does not delete CRDs, you must delete them yourself to clean up everything. For example:

  helm uninstall --namespace kiali-operator kiali-operator
  kubectl delete crd monitoringdashboards.monitoring.kiali.io
  kubectl delete crd kialis.kiali.io

Known Problem: Uninstall Hangs

If the uninstall hangs (typically because not all Kiali CRs were deleted before uninstalling the operator), the following may help resolve the problem. The goal is to clear the finalizer from the Kiali CRs that are causing the hang.

  If you installed the Kiali CR in a different namespace, replace istio-system in the command with the namespace in which the Kiali CR is located. The below command also assumes the Kiali CR is named kiali.

  kubectl patch kiali kiali -n istio-system -p '{"metadata":{"finalizers": []}}' --type=merge

Note that even if this fixes the hang, you may still have remnants of the Kiali Server in your cluster; you will need to delete those resources manually.

Upgrade

Upgrading Istio

In general, Kiali tries to maintain compatibility with at least the two most recent minor versions of Istio; e.g. Kiali 1.26.0 is compatible with both Istio 1.7 and 1.8. If you are upgrading Istio by one minor version, first upgrade Kiali to the most recent version that supports that Istio version, then upgrade Istio. See the version compatibility section for more details on which versions are supported.

Canary

  1. Deploy the upgraded Istio control-plane.

  2. Update the Kiali Server configuration to point to the new Istio deployment. These fields need to be updated:

    • spec.external_services.istio.config_map_name to the new Istio configmap revision.

    • spec.external_services.istio.istiod_deployment_name to the new istio deployment revision.

    • spec.external_services.istio.istio_sidecar_injector_config_map_name to the new istio sidecar injector configmap revision.

How you update these fields depends on how you have deployed Kiali.

Operator

If you are using the kiali-operator, update the Kiali CR configuration:

  kubectl patch kialis kiali -n kiali-operator \
    -p '{"spec": {"external_services": {"istio": {"config_map_name": "istio-canary", "istiod_deployment_name": "istiod-canary", "istio_sidecar_injector_config_map_name": "istio-sidecar-injector-canary"}}}}' \
    --type=merge

Wait until the operator restarts the Kiali Server and then verify everything is working correctly.

Helm Chart - Kiali Server

If you are using the kiali-server Helm Chart, set the Helm Chart values:

  helm upgrade \
    --set external_services.istio.config_map_name=istio-canary \
    --set external_services.istio.istio_sidecar_injector_config_map_name=istio-sidecar-injector-canary \
    --set external_services.istio.istiod_deployment_name=istiod-canary \
    --repo https://kiali.org/helm-charts \
    --namespace istio-system \
    kiali-server kiali-server

Then restart the Kiali pod to pick up the new configmap changes.

  kubectl rollout restart deployments kiali -n istio-system

Upgrading Kiali

Operator

When upgrading Kiali using the operator, it is recommended to keep the operator version in sync with the Kiali version. The best way to do this is to ensure that your Kiali CR's spec.deployment.image_version field is set to operator_version (this is the default if the value is left unset), and then upgrade the operator. When the operator upgrade completes, the operator will upgrade the Kiali Server image version to match its own image version.
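Expressed as a Kiali CR fragment, that looks like the sketch below (leaving the field unset has the same effect):

deployment:
  image_version: operator_version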

Helm Chart - Kiali Server

If you are managing the Kiali Server through the Helm Chart and not with the operator, you can upgrade Kiali by upgrading the version of the kiali-server Helm Chart.
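For example, to move to a specific chart version (X.Y.Z is a placeholder):

  helm upgrade \
    --namespace istio-system \
    --repo https://kiali.org/helm-charts \
    --version X.Y.Z \
    kiali-server \
    kiali-server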

Additional Notes

Versioning

When you install the Kiali Operator, it will be configured to install a Kiali Server that is the same version as the operator itself. For example, if you have Kiali Operator v1.34.0 installed, that operator will install Kiali Server v1.34.0. If you want to upgrade (or downgrade) the Kiali Server then upgrade (or downgrade) the Kiali Operator so it can install the Kiali Server you want.

However, there are certain use-cases in which you want the Kiali Operator to install a Kiali Server whose version is different than the operator version. There are also use-cases in which you want the Kiali Operator to install a Kiali Server whose container image is found in a different image repository or has a different image name (the default image repository and name is quay.io/kiali/kiali). To support these use-cases, the Kiali CR provides the settings spec.deployment.image_name and spec.deployment.image_version. When you set one or both, the Kiali Operator will use those settings to override the defaults when it determines what Kiali Server image to install.

For security reasons, these settings are disabled by default when you install the Kiali Operator via the Helm Chart. To allow the operator to install a non-default Kiali Server you must set the "allow ad-hoc kiali image" helm setting to true. If you already have an operator installed and you want to enable this feature, you can do so by following the instructions found at FAQ: How to configure some operator features at runtime.
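As a sketch, and assuming the helm value is named allowAdHocKialiImage (check the operator chart's values.yaml for the exact name in your chart version), enabling it at install time might look like:

  helm install \
    --set allowAdHocKialiImage=true \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    kiali-operator \
    kiali-operator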

Customize the Kiali UI web context root

By default, when installed on OpenShift, the Kiali UI is deployed at the root context path of "/", for example https://kiali-istio-system.<your_cluster_domain_or_ip>/. In some situations, such as when you want to serve the Kiali UI alongside other apps under the same host name (for example, example.com/kiali and example.com/app1), you can edit the Kiali CR and provide a different value for spec.server.web_root. The path must begin with a / and must not end with one (e.g. /kiali or /mykiali).

An example of custom web root:

server:
  web_root: /kiali

The above is the default when Kiali is installed on Kubernetes, so on Kubernetes the Kiali UI is accessed at the context path /kiali.

You can also set the FQDN and port for the resulting service (in case you are using an Istio VirtualService or a Kubernetes Ingress that does not set the proper parameters) using the settings spec.server.web_fqdn and spec.server.web_port, as shown in the example:

server:
  web_fqdn: mykiali.mydomain.com
  web_port: 443

Namespace Management

Accessible Namespaces

The Kiali CR tells the Kiali Operator which namespaces are accessible to Kiali. It is specified in the CR via the accessible_namespaces setting under the main deployment section.

The specified namespaces are those that have service mesh components to be observed by Kiali. Additionally, the namespace to which Kiali is installed must be accessible (typically the same namespace as Istio). Each list entry can be a regex matched against all namespaces the operator can see. If not set in the Kiali CR, then the default behavior makes all current namespaces accessible except for some internal namespaces that should typically be ignored.

  When installing multiple Kiali instances into a single cluster, accessible_namespaces must be mutually exclusive. In other words, a namespace must appear in at most one accessible_namespaces list. Regular expressions must not have overlapping patterns.

As an example, if Kiali is to be installed in the istio-system namespace and is expected to monitor all namespaces prefixed with mycorp_, the setting would be:

deployment:
  accessible_namespaces:
  - istio-system
  - mycorp_.*

  A cluster can have at most one Kiali instance with accessible_namespaces set to the special value of ** (two asterisks). This denotes that Kiali is given access, via a single cluster role, to all namespaces that currently exist or that will be created in the future. This can be in addition to Kiali instances configured with explicit, mutually exclusive namespaces, as mentioned above.

  If the operator was installed via Helm without the option clusterRoleCreator: true, then you cannot later edit the Kiali CR and change accessible_namespaces to **. You must reinstall the operator so that it can be granted the additional permissions required (--set clusterRoleCreator=true). Note that by default the Kiali Operator Helm Chart installs the operator with clusterRoleCreator set to true.
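If you do need to reinstall with that permission, a sketch (adjust release names and namespaces to match your environment) would be:

  helm uninstall --namespace kiali-operator kiali-operator
  helm install \
    --set clusterRoleCreator=true \
    --namespace kiali-operator \
    --repo https://kiali.org/helm-charts \
    kiali-operator \
    kiali-operator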

Maistra supports multi-tenancy, and the accessible_namespaces setting extends that feature to Kiali. However, explicit naming of accessible namespaces can benefit non-Maistra installations as well: with it, Kiali does not need cluster roles and the Kiali Operator does not need permissions to create them.

Excluded Namespaces

The Kiali CR tells the Kiali Operator which accessible namespaces should be excluded from the list of namespaces provided by the API and UI. This can be useful if wildcards are used when specifying Accessible Namespaces. This setting has no effect on namespace accessibility. It is only a filter, not security-related.

For example, if my accessible_namespaces include mycorp_.* but I don’t want to see test namespaces, I could add to the default entries:

api:
  namespaces:
    exclude:
      - istio-operator
      - kiali-operator
      - ibm.*
      - kube.*
      - openshift.*
      - mycorp_test.*

Namespace Selectors

To fetch a subset of the available namespaces, Kiali supports an optional label selector. This selector is especially useful when spec.deployment.accessible_namespaces is set to ["**"] but you want to reduce the namespaces presented in the UI’s namespace list.

The label selector is defined in the Kiali CR setting spec.api.namespaces.label_selector.

The example below selects all namespaces that have a label kiali-enabled: true:

api:
  namespaces:
    label_selector: kiali-enabled=true

For further information on how this label_selector interacts with spec.deployment.accessible_namespaces read the technical documentation.

To label a namespace you can use the following command. For more information see the official documentation.

  kubectl label namespace my-namespace kiali-enabled=true

Note that when deploying multiple control planes in the same cluster, you will want to set the label selector to a value unique to each control plane. This allows each Kiali instance to select only the namespaces relevant to its control plane.

Because in this "soft-multitenancy" mode spec.deployment.accessible_namespaces is typically set to an explicit set of namespaces (i.e. not ["**"]), you do not have to do anything with this label_selector: the default value of label_selector is kiali.io/member-of: <spec.istio_namespace> when spec.deployment.accessible_namespaces is not set to the "all namespaces" value ["**"]. This allows you to have multiple control planes in the same cluster, with each control plane having its own Kiali instance.

If you set your own Kiali instance name in the Kiali CR (i.e. you set spec.deployment.instance_name to something other than kiali), then the default label will be kiali.io/<spec.deployment.instance_name>.member-of: <spec.istio_namespace>.
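For example, a Kiali instance dedicated to a control plane in istio-system2 could make the per-control-plane selector explicit; this sketch simply mirrors the default described above:

api:
  namespaces:
    label_selector: kiali.io/member-of=istio-system2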

Installing Multiple Instances of Kiali

By installing a single Kiali operator in your cluster, you can install multiple instances of Kiali by simply creating multiple Kiali CRs. For example, if you have two Istio control planes in namespaces istio-system and istio-system2, you can put a Kiali CR in each of those namespaces to install a Kiali instance in each control plane.

If you wish to install multiple Kiali instances in the same namespace, or if you need the Kiali instance to have different resource names than the default of kiali, you can specify spec.deployment.instance_name in your Kiali CR. The value for that setting will be used to create a unique instance of Kiali using that instance name rather than the default kiali. One use-case for this is to be able to have unique Kiali service names across multiple Kiali instances in order to be able to use certain routers/load balancers that require unique service names.
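For example, a second Kiali instance might set the following (kiali-second is a hypothetical name; it must follow the rules described in the note below):

deployment:
  instance_name: kiali-second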

  Since the spec.deployment.instance_name field is used for the Kiali resource names, including the Service name, you must ensure the value you assign this setting follows the Kubernetes DNS Label Name rules. If it does not, the operator will abort the installation. Note also that because Kiali uses this value as a prefix (it may append additional characters for some resource names), its length is limited to 40 characters.

Reducing Permissions in OpenShift

By default, Kiali runs with a cluster role that provides some read-write capabilities, allowing Kiali to add, modify, or delete some service mesh resources, for example to add and modify Istio destination rules in any namespace.

If you prefer not to run Kiali with this read-write role across the cluster, it is possible to reduce these permissions to individual namespaces.

  This only works for OpenShift since it can return a list of namespaces that a user has access to. Know how to make this work with Kubernetes? Awesome, please let us know in this issue.

First, remove the cluster-wide permissions that are granted to Kiali by default:

  oc delete clusterrolebindings kiali

Then you will need to grant the kiali role in the namespace of your choosing:

  oc adm policy add-role-to-user kiali system:serviceaccount:istio-system:kiali-service-account -n ${NAMESPACE}

You can alternatively tell the Kiali Operator to install Kiali in "view only" mode (this works on both OpenShift and Kubernetes). You do this by setting view_only_mode to true in the Kiali CR:

deployment:
  view_only_mode: true

This allows Kiali to read service mesh resources found in the cluster, but it does not allow Kiali to add, modify, or delete them.

Development Install

This option installs the latest Kiali Operator and Kiali Server images from the master branch. It also allows Kiali to access all current and future namespaces. This option is good for demo and development installations.

kubectl create namespace kiali-operator
helm install \
  --set cr.create=true \
  --set cr.namespace=istio-system \
  --set cr.spec.deployment.image_version=latest \
  --set image.tag=latest \
  --namespace kiali-operator \
  --repo https://kiali.org/helm-charts \
  kiali-operator \
  kiali-operator