Question
How can I set up Ververica Platform in a multi-tenant Kubernetes environment and deploy Flink jobs to different Kubernetes namespaces?
Answer
Note: This applies to Ververica Platform 2.0 and later. Enterprise Editions only.
Without any configuration, Kubernetes uses the namespace `default` for every operation, and so does Ververica Platform. In a multi-tenant environment, however, you may want to run Flink jobs in dedicated namespaces for isolation. Set up such a configuration as follows:
- Create the Kubernetes namespaces you want to set up, e.g. `devjobs`, `opsjobs`:
kubectl create namespace devjobs
kubectl create namespace opsjobs
- Create your `values.yaml` file with the following content:
rbac:
additionalNamespaces:
- devjobs
- opsjobs
Note: You can also add the above to the existing `values.yaml` that was used to set up Ververica Platform. If you already have `rbac.additionalNamespaces` specified, simply append the additional namespaces there.
- Now, use `helm upgrade` to grant Ververica Platform permissions in those namespaces
helm upgrade vvp ververica-platform-2.0.4.tgz --values values.yaml
- Verify the granted permissions via the following commands
kubectl describe role vvp-ververica-platform --namespace devjobs
kubectl describe role vvp-ververica-platform --namespace opsjobs
- Now, in Ververica Platform's web UI, create a `DeploymentTarget` for each of the Kubernetes namespaces you created above, for example one for the `devjobs` namespace (a minimal sketch of such a resource follows after this list).
- The created `DeploymentTarget` can then be used to create a `Deployment`.
Note: A `DeploymentTarget` is always scoped to a Ververica Platform namespace (`default` by default; not to be mixed up with any existing Kubernetes namespace). Be sure to select the platform namespace you want to create the `DeploymentTarget` for. In a multi-tenant setup, you can create additional platform namespaces via Ververica Platform's web UI and then create an appropriate `DeploymentTarget` in each of them.
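For reference, the resource behind such a `DeploymentTarget` looks roughly like the following minimal sketch. The target name `devjobs-target` is just an example; `metadata.namespace` refers to the Ververica Platform namespace, while `spec.kubernetes.namespace` points to the Kubernetes namespace created above:

```yaml
apiVersion: v1
kind: DeploymentTarget
metadata:
  name: devjobs-target     # example name
  namespace: default       # Ververica Platform namespace, not a Kubernetes namespace
spec:
  kubernetes:
    namespace: devjobs     # Kubernetes namespace the Flink jobs will run in
```

When you then create a `Deployment`, select this target so that its Flink job is started in the corresponding Kubernetes namespace.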
Important: When you run Flink jobs in a namespace different from the one in which the Ververica Platform pod runs, make sure that access from the Ververica Platform pod to your `jobmanager` pod on port `8081` is allowed by the configured network policies, if any. If no network policy is configured in your cluster, cross-namespace access is allowed by default. A sketch of such a policy is shown below.
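If your cluster does use network policies, an ingress rule along the following lines is typically needed. This is only a sketch: the namespace name `vvp`, the `component: jobmanager` pod label, and the `kubernetes.io/metadata.name` namespace label are assumptions and must be adapted to how your namespaces and Flink pods are actually labelled:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-vvp-to-jobmanager   # hypothetical name
  namespace: devjobs              # namespace the Flink jobs run in
spec:
  # Assumption: jobmanager pods carry a `component: jobmanager` label;
  # adjust the selector to match your actual pod labels.
  podSelector:
    matchLabels:
      component: jobmanager
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Assumption: Ververica Platform runs in a namespace named `vvp` and the
        # cluster sets the standard `kubernetes.io/metadata.name` namespace label.
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: vvp
      ports:
        - protocol: TCP
          port: 8081
```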