Product:TIBCO Spotfire Server
Versions:12.4.0 or higher
Summary:
Details the steps needed to get a new Spotfire Server deployed in a local Minikube Kubernetes (K8s) cluster. This is accomplished by using the Spotfire Cloud Deployment Kit (CDK).
Details:
This guidance is for those who want to quickly spin up a new Spotfire Server in a Kubernetes cluster to test issues when using the Spotfire Cloud Deployment Kit (CDK). The example in this article uses an AlmaLinux 9 machine (mymachine.company.com), but the commands shown below are also expected to work with other yum/dnf-compatible Linux distributions such as RHEL, Fedora, and CentOS.
Resolution:
1. Install Docker, Minikube, Kubectl, Git and Helm.
Docker:
$ sudo dnf -y upgrade
$ sudo dnf -y install yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin --allowerasing
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo usermod -aG docker $USER
$ newgrp docker
$ exit
$ exit
We exit here (twice) to force a logout so that, after we log back in, we will not have to prefix docker commands with 'sudo'.
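After logging back in, you can optionally verify that docker works without 'sudo' before continuing (a quick sanity check, not part of the CDK instructions):
$ docker ps
$ docker run --rm hello-world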
Kubectl:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" $ curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" $ echo "$(cat kubectl.sha256) kubectl" | sha256sum --check kubectl: OK $ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl $ kubectl version
Git and Helm:
$ sudo yum -y install git
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm --help
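As an optional check that both tools are installed and on your PATH:
$ git --version
$ helm version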
Minikube:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
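A quick optional check that the Minikube binary was installed correctly:
$ minikube version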
2. Start a new Kubernetes cluster using Minikube, and enable the container registry:
$ minikube start --cpus 4 --memory 8192
$ minikube addons enable registry
It is recommended that you provision a minimum of 8GB of memory to Minikube. In a separate terminal, forward an available local port (e.g. 5000) to the Kubernetes registry service port (80):
$ kubectl port-forward --namespace kube-system service/registry 5000:80
This will allow you to push the container images after you build them in a later step.
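To confirm that the registry is reachable through the port-forward, you can optionally query the Docker Registry HTTP API; at this point the repository list should still be empty, for example:
$ curl http://localhost:5000/v2/_catalog
{"repositories":[]}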
3. Install the PostgreSQL server:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install vanilla-tssdb bitnami/postgresql
$ export POSTGRES_PASSWORD=$(kubectl get secret \
    --namespace default vanilla-tssdb-postgresql \
    -o jsonpath="{.data.postgres-password}" | base64 --decode)
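Before continuing, you can optionally confirm that the PostgreSQL pod is running and that the password variable was populated:
$ kubectl get pods
$ echo $POSTGRES_PASSWORD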
4. Obtain the latest official release of the Spotfire CDK (which is version 1.4.0 at the time of this writing):
$ git clone https://github.com/TIBCOSoftware/spotfire-cloud-deployment-kit.git -b v1.4.0
5. Copy the Spotfire Server installation files to the CDK's 'downloads' directory. It should look like this when you're done copying:
$ ls -al ~/spotfire-cloud-deployment-kit/containers/downloads/
total 1868370
drwxr-xr-x  2 myuser svc_dev_str_server_admins        10 Jun 28 13:48 .
drwxr-xr-x 13 myuser svc_dev_str_server_admins        15 Jun 26 13:58 ..
-rw-r--r--  1 myuser svc_dev_str_server_admins        52 Jun 26 13:58 .gitignore
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 248434933 Jun 30 14:30 Spotfire.Dxp.netcore-linux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 150882007 Jun 30 14:31 Spotfire.Dxp.PythonServiceLinux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 419967725 Jun 30 14:31 Spotfire.Dxp.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 128760600 Jun 30 14:31 Spotfire.Dxp.TerrServiceLinux.sdn
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 318626713 Jun 30 14:31 TIB_sfire_server_12.4.0_languagepack-multi.zip
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 267634757 Jun 30 14:31 tsnm-12.4.0.x86_64.tar.gz
-rwxr-xr-x  1 myuser svc_dev_str_server_admins 376060615 Jun 30 14:31 tss-12.4.0.x86_64.tar.gz
You can obtain these files from edelivery.tibco.com.
6. Navigate to the CDK's 'containers' directory and build the container images:
$ cd ~/spotfire-cloud-deployment-kit/containers/
$ make build
This will take a few minutes to complete. If you run into any errors, check the Issues for the CDK project on GitHub; sometimes a manual change to a Dockerfile is needed and is then corrected in a future CDK release.
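Once the build finishes, you can optionally list the local images to confirm they were created (the exact image names and tags depend on the CDK release):
$ docker images | grep spotfire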
7. After the images are built, you can push them to your K8s container registry:
$ make REGISTRY=localhost:5000 push
This will take a few minutes to complete.
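To verify that the images reached the Minikube registry, you can query the registry catalog again through the port-forward from step 2:
$ curl http://localhost:5000/v2/_catalog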
8. Navigate to the CDK's 'helm' directory, and build the charts:
$ cd ~/spotfire-cloud-deployment-kit/helm/
$ make
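If you want a quick sanity check that the spotfire-server chart was assembled, you can inspect its metadata (optional):
$ helm show chart ./charts/spotfire-server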
9. Navigate to the CDK's spotfire-server helm directory, and deploy the Spotfire Server:
$ cd ~/spotfire-cloud-deployment-kit/helm/charts/spotfire-server/
$ helm install tss1240remotedb . \
    --set acceptEUA=true \
    --set global.spotfire.image.registry="localhost:5000" \
    --set global.spotfire.image.pullPolicy="Always" \
    --set database.bootstrap.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
    --set database.create-db.databaseUrl="jdbc:postgresql://vanilla-tssdb-postgresql.default.svc.cluster.local/" \
    --set database.create-db.adminUsername="postgres" \
    --set database.create-db.adminPassword="$POSTGRES_PASSWORD" \
    --set database.create-db.enabled=true \
    --set configuration.site.publicAddress="http://mymachine.company.com:8081"
Here, we have set the Helm release name to something descriptive: 'tss1240remotedb' (to indicate that this is a Spotfire Server 12.4.0 deployment that uses a remote database). In addition, we have set the Spotfire Server's public port to 8081. If this port is not available on your machine, choose a different port.
You should now see the following pods running in your K8s cluster:
$ kubectl get pod
NAME                                              READY   STATUS    RESTARTS   AGE
tss1240remotedb-cli-6dfd6b4794-w7hln              0/1     Running   0          2s
tss1240remotedb-config-job-1-jqmst                0/1     Running   0          2s
tss1240remotedb-haproxy-59cf897595-ct5qw          0/1     Running   0          2s
tss1240remotedb-log-forwarder-c785fd96d-dnkrq     0/1     Running   0          2s
tss1240remotedb-spotfire-server-99d88f544-vqdsq   0/2     Running   0          2s
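The pods typically take a few minutes to become ready. You can optionally watch their status until the *-spotfire-server pod reports 2/2 READY (press Ctrl+C to stop watching):
$ kubectl get pods -w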
10. In a separate terminal, forward local TCP port 8081 to the K8s port configured for the *-haproxy pod (which is 80 by default):
$ kubectl port-forward --address 0.0.0.0 --namespace default service/tss1240remotedb-haproxy 8081:80
This will allow remote clients to connect to the Spotfire Server over port 8081.
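Once the pods from the previous step are ready, a quick optional check is to request the public address with curl; any HTTP response indicates that the HAProxy service and the Spotfire Server are reachable through the forwarded port:
$ curl -I http://mymachine.company.com:8081/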
11. Extract the Spotfire Server admin user's password by decoding the K8s secret:
$ export SPOTFIREADMIN_PASSWORD=$(kubectl get secrets \
    --namespace default tss1240remotedb-spotfire-server \
    -o jsonpath="{.data.SPOTFIREADMIN_PASSWORD}" | base64 --decode)
$ echo $SPOTFIREADMIN_PASSWORD
wXjiuqoDet9G
12. In a web browser, navigate to the server's public address that you configured above:
http://mymachine.company.com:8081
Log in with the username 'admin', using the password you extracted/decoded in the previous step.