Pas Apicella

Information on VMware Tanzu: Build, run and manage apps on any cloud

GitHub Actions to deploy Spring Boot application to Tanzu Application Service for Kubernetes

Wed, 2020-06-17 21:28
In this demo I show how to deploy a simple Spring Boot application using GitHub Actions onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below.
  
$ kapp list
Target cluster 'https://35.189.13.31' (nodes: gke-tanzu-gke-lab-f67-np-f67b23a0f590-abbca04e-5sqc, 8+)

Apps in namespace 'default'

Name                        Namespaces                                              Lcs    Lca
certmanager-cluster-issuer  (cluster)                                               true   8d
externaldns                 (cluster),external-dns                                  true   8d
harbor-cert                 harbor                                                  true   8d
tas                         (cluster),cf-blobstore,cf-db,cf-system,                 false  8d
                            cf-workloads,cf-workloads-staging,istio-system,kpack,
                            metacontroller
tas4k8s-cert                cf-system                                               true   8d

Lcs: Last Change Successful
Lca: Last Change Age

5 apps

Succeeded

The demo code is on GitHub at the URL below. To follow along, use your own GitHub repository and make the changes detailed below. This example is for a Spring Boot application, so the workflow YAML would differ for non-Java applications, but there are many starter templates to choose from for other programming languages.

https://github.com/papicella/github-boot-demo



GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

1. Create a folder at the root of your project source code as follows

$ mkdir -p ".github/workflows"

2. In ".github/workflows" folder, add a .yml or .yaml file for your workflow. For example, ".github/workflows/maven.yml"

3. Use the "Workflow syntax for GitHub Actions" reference documentation to choose events to trigger an action, add actions, and customize your workflow. In this example the workflow file "maven.yml" looks as follows.

maven.yml
  
name: Java CI with Maven and CD with CF CLI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11.0.5
      uses: actions/setup-java@v1
      with:
        java-version: 11.0.5
    - name: Build with Maven
      run: mvn -B package --file pom.xml
    - name: push to TAS4K8s
      env:
        CF_USERNAME: ${{ secrets.CF_USERNAME }}
        CF_PASSWORD: ${{ secrets.CF_PASSWORD }}
      run: |
        curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
        ./cf api https://api.tas.lab.pasapples.me --skip-ssl-validation
        ./cf auth $CF_USERNAME $CF_PASSWORD
        ./cf target -o apples-org -s development
        ./cf push -f manifest.yaml

A few things to note about the workflow syntax for the GitHub Action above:

  • We are using a Maven action sample which fires on a push or pull request to the master branch
  • We are using JDK 11 rather than Java 8
  • Three steps exist here
    • Setup JDK
    • Maven Build/Package
    • CF CLI push to TAS4K8s using the JAR artifact built by Maven
  • We download the CF CLI onto the Ubuntu runner image
  • We have masked the username and password using Secrets

4. Next in the project root add a manifest YAML for deployment to TAS4K8s

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: github-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

5. Now we need to add Secrets to the GitHub repo which are referenced in our "maven.yml" file. In our case they are as follows.
  • CF_USERNAME
  • CF_PASSWORD
In your GitHub repository, click the "Settings" tab, then in the left-hand navigation bar click "Secrets" and define the username and password for your TAS4K8s instance as shown below.
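
If you prefer the command line over the web UI, the GitHub CLI can set the same secrets. A minimal sketch, assuming the "gh" CLI is installed and authenticated against your repository (the values shown are placeholders):

# set the two secrets referenced by maven.yml
$ gh secret set CF_USERNAME --body "your-cf-username"
$ gh secret set CF_PASSWORD --body "your-cf-password"

# confirm they exist (secret values are never displayed)
$ gh secret list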



6. At this point that is all we need to test our GitHub Action. Here in IntelliJ IDEA I issue a commit/push to trigger the GitHub action



7. If all went well, the "Actions" tab in your GitHub repo will show you the status and logs as follows






8. Finally our application will be deployed to TAS4K8s as shown below and we can invoke it using HTTPie or curl, for example
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name requested state instances memory disk urls
github-TAS4K8s-boot-demo started 1/1 1G 1G github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
my-springboot-app started 1/1 1G 1G my-springboot-app.apps.tas.lab.pasapples.me
test-node-app started 1/1 1G 1G test-node-app.apps.tas.lab.pasapples.me

$ cf app github-TAS4K8s-boot-demo
Showing health and status for app github-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: github-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
last uploaded: Thu 18 Jun 12:03:19 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-18T02:03:32Z 0.2% 136.5M of 1G 0 of 1G

$ http http://github-tas4k8s-boot-demo.apps.tas.lab.pasapples.me
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Thu, 18 Jun 2020 02:07:39 GMT
server: istio-envoy
x-envoy-upstream-service-time: 141

Thu Jun 18 02:07:39 GMT 2020



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitHub Actions
https://github.com/features/actions

GitHub Marketplace - Actions
https://github.com/marketplace?type=actions
Categories: Fusion Middleware

Deploying a Spring Boot application to Tanzu Application Service for Kubernetes using GitLab

Mon, 2020-06-15 20:44
In this demo I show how to deploy a simple Spring Boot application using a GitLab pipeline onto Tanzu Application Service for Kubernetes (TAS4K8s).

Steps

Ensure you have Tanzu Application Service for Kubernetes (TAS4K8s) running as shown below
  
$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  10d
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

Ensure you have GitLab running. In this example it's installed on a Kubernetes cluster but it doesn't have to be. All that matters here is that GitLab can access the API endpoint of your TAS4K8s install
  
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
gitlab gitlab 2 2020-05-15 13:22:15.470219 +1000 AEST deployed gitlab-3.3.4 12.10.5

1. First let's create a basic Spring Boot application with a simple RESTful endpoint as shown below. It's best to use the Spring Initializr to create this application. I simply used the web and lombok dependencies as shown below.

Note: Make sure you select Java version 11.

Spring Initializer Web Interface


Using built in Spring Initializer in IntelliJ IDEA.
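
If you prefer the command line, the same skeleton can be generated from the Spring Initializr REST API. A sketch, assuming curl and unzip are available (the parameter names are the ones documented at start.spring.io):

$ curl https://start.spring.io/starter.zip \
    -d type=maven-project \
    -d javaVersion=11 \
    -d dependencies=web,lombok \
    -d name=demo \
    -o demo.zip
$ unzip demo.zip -d gitlab-tas4k8s-boot-demo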


Here is my simple RESTful controller which simply outputs today's date.
  
package com.example.demo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
@Slf4j
public class FrontEnd {

    @GetMapping("/")
    public String index() {
        log.info("An INFO Message");
        return new Date().toString();
    }
}

2. Create an empty project in GitLab using the name "gitlab-TAS4K8s-boot-demo"



3. At this point we add our project files from step #1 above into the empty GitLab project repository. We do that as follows.

$ cd "existing project folder from step #1"
$ git init
$ git remote add origin http://gitlab.ci.run.haas-236.pez.pivotal.io/root/gitlab-tas4k8s-boot-demo.git
$ git add .
$ git commit -m "Initial commit"
$ git push -u origin master

Once done we now have our GitLab project repository with the files we created as part of the project setup.


4. It's always worth running the code locally just to make sure it's working, so if you like you can do that as follows

RUN:

$ ./mvnw spring-boot:run

CURL:

$ curl http://localhost:8080/
Tue Jun 16 10:46:26 AEST 2020

HTTPie:

papicella@papicella:~$
papicella@papicella:~$
papicella@papicella:~$ http :8080/
HTTP/1.1 200
Connection: keep-alive
Content-Length: 29
Content-Type: text/plain;charset=UTF-8
Date: Tue, 16 Jun 2020 00:46:40 GMT
Keep-Alive: timeout=60

Tue Jun 16 10:46:40 AEST 2020

5. Our GitLab project has no pipelines defined, so let's create one as follows in the project root directory using the default pipeline name ".gitlab-ci.yml"

image: openjdk:11-jdk

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
  - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
  - ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation
  - ./cf auth $CF_USERNAME $CF_PASSWORD
  - ./cf target -o apples-org -s development
  - ./cf push -f manifest.yaml
  only:
  - master


Note: We have not defined any tests in our pipeline, which we really should, but we haven't written any in this example.
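
For reference, a test stage could look something like the sketch below; it assumes standard Maven tests under src/test and that "test" is added to the stages list between build and deploy.

stages:
  - build
  - test
  - deploy

test:
  stage: test
  script: ./mvnw test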

6. For this pipeline to work we will need to do the following

- Add a manifest.yaml file in the project root to deploy our simple Spring Boot RESTful application

---
applications:
  - name: gitlab-TAS4K8s-boot-demo
    memory: 1024M
    instances: 1
    path: ./target/demo-0.0.1-SNAPSHOT.jar

- Alter the API endpoint to match your TAS4K8s endpoint

- ./cf api https://api.system.run.haas-236.pez.pivotal.io --skip-ssl-validation

- Alter the target to use your ORG and SPACE within TAS4K8s.

- ./cf target -o apples-org -s development

The following command shows what your CF CLI is currently targeting, so you can make sure you edit the pipeline with the correct details.
  
$ cf target
api endpoint: https://api.system.run.haas-236.pez.pivotal.io
api version: 2.150.0
user: pas
org: apples-org
space: development

7. For the ".gitlab-ci.yml" to work we need to define two environment variables for our username and password. Those two are as follows and are our login credentials for TAS4K8s.

  • CF_USERNAME 
  • CF_PASSWORD

To do that we need to navigate to "Project Settings -> CI/CD -> Variables" and fill in the appropriate details as shown below



8. Now let's add the two new files using git, add a commit message and push the changes

$ git add .gitlab-ci.yml
$ git add manifest.yaml
$ git commit -m "add pipeline configuration"
$ git push -u origin master

9. Navigate to GitLab UI "CI/CD -> Pipelines" and we should see our pipeline starting to run








10. If everything went well!!!



11. Finally our application will be deployed to TAS4K8s as shown below
  
$ cf apps
Getting apps in org apples-org / space development as pas...
OK

name requested state instances memory disk urls
gitlab-TAS4K8s-boot-demo started 1/1 1G 1G gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
gitlab-tas4k8s-demo started 1/1 1G 1G gitlab-tas4k8s-demo.apps.system.run.haas-236.pez.pivotal.io
test-node-app started 1/1 1G 1G test-node-app.apps.system.run.haas-236.pez.pivotal.io

$ cf app gitlab-TAS4K8s-boot-demo
Showing health and status for app gitlab-TAS4K8s-boot-demo in org apples-org / space development as pas...

name: gitlab-TAS4K8s-boot-demo
requested state: started
isolation segment: placeholder
routes: gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
last uploaded: Tue 16 Jun 11:29:03 AEST 2020
stack:
buildpacks:

type: web
instances: 1/1
memory usage: 1024M
state since cpu memory disk details
#0 running 2020-06-16T01:29:16Z 0.1% 118.2M of 1G 0 of 1G

12. Access it as follows.

$ http http://gitlab-tas4k8s-boot-demo.apps.system.run.haas-236.pez.pivotal.io
HTTP/1.1 200 OK
content-length: 28
content-type: text/plain;charset=UTF-8
date: Tue, 16 Jun 2020 01:35:28 GMT
server: istio-envoy
x-envoy-upstream-service-time: 198

Tue Jun 16 01:35:28 GMT 2020

Of course, if you wanted to create an API-like service you could use the source code at the repo below, which uses OpenAPI, rather than the simple demo shown here.

https://github.com/papicella/spring-book-service



More Information

Download TAS4K8s
https://network.pivotal.io/products/tas-for-kubernetes/

GitLab
https://about.gitlab.com/
Categories: Fusion Middleware

Installing a UI for Tanzu Application Service for Kubernetes

Thu, 2020-06-04 23:18
Having installed Tanzu Application Service for Kubernetes a few times, a UI is something I must have. In this post I show how to get Stratos deployed and running on Tanzu Application Service for Kubernetes (TAS4K8s) beta 0.2.0.

Steps

Note: It's assumed you have TAS4K8s deployed and running as per the output of "kapp" 

$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  2h
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

1. First let's create a namespace to install Stratos into.

$ kubectl create namespace console
namespace/console created

2. Using helm 3 install Stratos as shown below.

$ helm install my-console --namespace=console stratos/console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Fri Jun  5 13:18:22 2020
NAMESPACE: console
STATUS: deployed
REVISION: 1
TEST SUITE: None
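
Note: the command above assumes the Stratos chart repository has already been added to your local Helm client. If it has not, something like the following is typically needed first (the repository URL is the one published by the Stratos project, so verify it against the current Stratos docs):

$ helm repo add stratos https://cloudfoundry.github.io/stratos
$ helm repo update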

3. You can verify it installed correctly a few ways as shown below

- Check using "helm ls -A"
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-console console 1 2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1
- Check everything in the namespace "console" is up and running
$ kubectl get all -n console
NAME READY STATUS RESTARTS AGE
pod/stratos-0 2/2 Running 0 34m
pod/stratos-config-init-1-mxqbw 0/1 Completed 0 34m
pod/stratos-db-7fc9b7b6b7-sp4lf 1/1 Running 0 34m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-console-mariadb ClusterIP 10.100.200.65 <none> 3306/TCP 34m
service/my-console-ui-ext LoadBalancer 10.100.200.216 10.195.75.164 443:32286/TCP 34m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/stratos-db 1/1 1 1 34m

NAME DESIRED CURRENT READY AGE
replicaset.apps/stratos-db-7fc9b7b6b7 1 1 1 34m

NAME READY AGE
statefulset.apps/stratos 1/1 34m

NAME COMPLETIONS DURATION AGE
job.batch/stratos-config-init-1 1/1 28s 34m
4. To invoke the UI run a script as follows.

Script:

export IP=`kubectl -n console get service my-console-ui-ext -ojsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo "Stratos URL: https://$IP:443"
echo ""

Output:

$ ./get-stratos-url.sh

Stratos URL: https://10.195.75.164:443

5. Invoking the URL above will take you to a screen as follows where you would select "Local Admin" account



6. Set a password and click "Finish" button


7. At this point we need to get an API endpoint for our TAS4K8s install. The easiest way to get that is to run the following command when logged in using the CF CLI

$ cf api
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0

8. Click on the "Register an Endpoint" + button as shown below


9. Select "Cloud Foundry" as the type you wish to register.

10. Enter details as shown below and click on "Register" button.
 


11. At this point you should connect to Cloud Foundry using your admin credentials for the TAS4K8s instance as shown below.


12. Once connected you're good to go and can start deploying some applications.




Categories: Fusion Middleware

Targeting specific namespaces with kubectl

Mon, 2020-06-01 00:45
A note to self, given kubectl does not allow querying multiple namespaces at once via its CLI

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'

OR (get all) if you want to see all resources

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get all;'
  
$ eval 'kubectl --namespace='{cf-system,kpack,istio-system}' get pod;'
NAME READY STATUS RESTARTS AGE
ccdb-migrate-995n7 0/2 Completed 1 3d23h
cf-api-clock-7595b76c78-94trp 2/2 Running 2 3d23h
cf-api-deployment-updater-758f646489-k5498 2/2 Running 2 3d23h
cf-api-kpack-watcher-6fb8f7b4bf-xh2mg 2/2 Running 0 3d23h
cf-api-server-5dc58fb9d-8d2nc 5/5 Running 5 3d23h
cf-api-server-5dc58fb9d-ghwkn 5/5 Running 4 3d23h
cf-api-worker-7fffdbcdc7-fqpnc 2/2 Running 2 3d23h
cfroutesync-75dff99567-kc8qt 2/2 Running 0 3d23h
eirini-5cddc6d89b-57dgc 2/2 Running 0 3d23h
fluentd-4fsp8 2/2 Running 2 3d23h
fluentd-5vfnv 2/2 Running 1 3d23h
fluentd-gq2kr 2/2 Running 2 3d23h
fluentd-hnjgm 2/2 Running 2 3d23h
fluentd-j6d5n 2/2 Running 1 3d23h
fluentd-wbzcj 2/2 Running 2 3d23h
log-cache-7fd48cd767-fj9k8 5/5 Running 5 3d23h
metric-proxy-695797b958-j7tns 2/2 Running 0 3d23h
uaa-67bd4bfb7d-v72v6 2/2 Running 2 3d23h
NAME READY STATUS RESTARTS AGE
kpack-controller-595b8c5fd-x4kgf 1/1 Running 0 3d23h
kpack-webhook-6fdffdf676-g8v9q 1/1 Running 0 3d23h
NAME READY STATUS RESTARTS AGE
istio-citadel-589c85d7dc-677fz 1/1 Running 0 3d23h
istio-galley-6c7b88477-fk9km 2/2 Running 0 3d23h
istio-ingressgateway-25g8s 2/2 Running 0 3d23h
istio-ingressgateway-49txj 2/2 Running 0 3d23h
istio-ingressgateway-9qsqj 2/2 Running 0 3d23h
istio-ingressgateway-dlbcr 2/2 Running 0 3d23h
istio-ingressgateway-jdn42 2/2 Running 0 3d23h
istio-ingressgateway-jnx2m 2/2 Running 0 3d23h
istio-pilot-767fc6d466-8bzt8 2/2 Running 0 3d23h
istio-policy-66f4f99b44-qhw92 2/2 Running 1 3d23h
istio-sidecar-injector-6985796b87-2hvxw 1/1 Running 0 3d23h
istio-telemetry-d6599c76f-ps6xd 2/2 Running 1 3d23h
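
If you would rather avoid eval altogether, a plain shell loop gives the same result; a minimal sketch:

$ for ns in cf-system kpack istio-system; do echo "--- $ns ---"; kubectl -n "$ns" get pods; done
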
Categories: Fusion Middleware

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Thu, 2020-05-07 05:10
Paketo Buildpacks are modular Buildpacks, written in Go. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here is how simple they are to use.

Steps

1. First, to get started you need a few things installed. The most important are the pack CLI and Docker up and running, to allow you to locally create OCI compliant images from your source code.

Prerequisites:

    Pack CLI
    Docker

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. In this example I am going to use Spring Boot application source code of mine. The GitHub URL is as follows, so you can clone it if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst
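
To follow along, cloning it locally is all that is needed, for example:

$ git clone https://github.com/papicella/msa-apifirst.git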

4. Build my OCI compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo

5. Now let's run our application locally as shown below

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

It also has a swagger UI endpoint as follows

http://localhost:8080/swagger-ui.html

7. Now you will see, as per below, that you have a locally built OCI compliant image. (The "40 years ago" creation date is expected; Cloud Native Buildpacks pin the image creation timestamp to a fixed date so that builds are reproducible.)

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

8. Now you can push this OCI compliant image to a container registry; here I am using Docker Hub.

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Dockerhub showing pushed OCI compliant image


9. If you wanted to deploy your application to Kubernetes you could do that as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
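
Once the LoadBalancer service gets an external IP you can hit the same endpoints as before. A quick check, assuming your cluster can provision LoadBalancer services (replace <EXTERNAL-IP> with the address shown by the first command):

$ kubectl get service msa-apifirst-paketo
$ curl http://<EXTERNAL-IP>:8080/customers/1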

10. Finally you can select from 3 different builders as per below. We used the "base" builder in our example above
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny

More Information

Paketo Buildpacks
https://paketo.io/
Categories: Fusion Middleware

Creating my first Tanzu Kubernetes Grid 1.0 workload cluster on AWS

Tue, 2020-05-05 04:15
With Tanzu Kubernetes Grid you can run the same K8s across data center, public cloud and edge for a consistent, secure experience for all development teams. Here is a step-by-step guide to getting this working on AWS, which is one of the first two supported IaaS platforms, the other being vSphere.

Steps

Before we get started we need to download a few bits and pieces all described here.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-set-up-tkg.html

Once you have done that, make sure you have the tkg CLI as follows

$ tkg version
Client:
Version: v1.0.0
Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

You will also need the following
  • kubectl is installed.
  • Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux.
  • Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS.
  • System time is synchronized with a Network Time Protocol (NTP) server
Once that is done follow this link for AWS pre-reqs and other downloads required

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws.html

1. Start by setting some AWS environment variables for your account. Ensure you select a region supported by TKG; in my case I am using a US region.

export AWS_ACCESS_KEY_ID=YYYY
export AWS_SECRET_ACCESS_KEY=ZZZZ
export AWS_REGION=us-east-1

2. Run the following clusterawsadm command to create a CloudFormation stack.

$ ./clusterawsadm alpha bootstrap create-stack
Attempting to create CloudFormation stack cluster-api-provider-aws-sigs-k8s-io

Following resources are in the stack:

Resource                  |Type                                                                                |Status
AWS::IAM::Group           |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/nodes.cluster-api-provider-aws.sigs.k8s.io         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/controllers.cluster-api-provider-aws.sigs.k8s.io   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::Role            |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::Role            |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::User            |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE

On AWS console you should see the stack created as follows


3. Ensure an SSH key pair exists in your region as shown below

$ aws ec2 describe-key-pairs --key-name us-east-key
{
    "KeyPairs": [
        {
            "KeyFingerprint": "71:44:e3:f9:0e:93:1f:e7:1e:c4:ba:58:e8:65:92:3e:dc:e6:27:42",
            "KeyName": "us-east-key"
        }
    ]
}
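
If a key pair does not exist yet in your region, one can be created and saved locally with something like the following (the key name simply matches the one used above):

$ aws ec2 create-key-pair --key-name us-east-key --query 'KeyMaterial' --output text > us-east-key.pem
$ chmod 400 us-east-key.pem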

4. Set Your AWS Credentials as Environment Variables for Use by Cluster API

$ export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

$ export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

$ export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

$ export AWS_B64ENCODED_CREDENTIALS=$(./clusterawsadm alpha bootstrap encode-aws-credentials)

5. Set the correct AMI for your region.

List here: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/rn/VMware-Tanzu-Kubernetes-Grid-10-Release-Notes.html#amis

$ export AWS_AMI_ID=ami-0cdd7837e1fdd81f8

6. Deploy the Management Cluster to Amazon EC2 with the Installer Interface

$ tkg init --ui

Follow the docs link below to fill in the desired details; most of the defaults should work.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws-ui.html

Once complete:

$ ./tkg init --ui
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T091728980865562.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider aws:v0.5.2
Generating cluster configuration...
Setting up bootstrapper...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Start creating management cluster...
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster pasaws-tkg-man-cluster as 'pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]


In the AWS console EC2 instances page you will see a few VMs that represent the management cluster as shown below


7. Show the management cluster as follows

$ tkg get management-cluster
+--------------------------+-----------------------------------------------------+
| MANAGEMENT CLUSTER NAME  | CONTEXT NAME                                        |
+--------------------------+-----------------------------------------------------+
| pasaws-tkg-man-cluster * | pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster |
+--------------------------+-----------------------------------------------------+

8. You

9. You can connect to the management cluster as follows to look at what is running

$ kubectl config use-context pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster
Switched to context "pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster".

10. Deploy a Dev cluster with Multiple Worker Nodes as shown below. This should take about 10 minutes or so.

$ tkg create cluster apples-aws-tkg --plan=dev --worker-machine-count 2
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T101702293042678.log
Creating workload cluster 'apples-aws-tkg'...

Context set for workload cluster apples-aws-tkg as apples-aws-tkg-admin@apples-aws-tkg

Waiting for cluster nodes to be available...

Workload cluster 'apples-aws-tkg' created

In the AWS console EC2 instances page you will see a few more VMs that represent our new TKG workload cluster


11. View what workload clusters are under management and have been created

$ tkg get clusters
+----------------+-------------+
| NAME           | STATUS      |
+----------------+-------------+
| apples-aws-tkg | Provisioned |
+----------------+-------------+

12. To connect to the workload cluster we just created use a set of commands as follows

$ tkg get credentials apples-aws-tkg
Credentials of workload cluster apples-aws-tkg have been saved
You can now access the cluster by switching the context to apples-aws-tkg-admin@apples-aws-tkg under /Users/papicella/.kube/config

$ kubectl config use-context apples-aws-tkg-admin@apples-aws-tkg
Switched to context "apples-aws-tkg-admin@apples-aws-tkg".

$ kubectl cluster-info
Kubernetes master is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443
KubeDNS is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The following link will also be helpful
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-connect.html

13. View your cluster nodes as shown below
  
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-12.ec2.internal Ready <none> 6h24m v1.17.3+vmware.2
ip-10-0-0-143.ec2.internal Ready master 6h25m v1.17.3+vmware.2
ip-10-0-0-63.ec2.internal Ready <none> 6h24m v1.17.3+vmware.2

Now you're ready to deploy workloads into your TKG workload cluster and/or create as many clusters as you need; a quick smoke test is sketched below. For more information use the links that follow.
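
As a quick smoke test of the new cluster, a throwaway nginx deployment works well; a minimal sketch (names are illustrative):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get pods,svc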


More Information

VMware Tanzu Kubernetes Grid
https://tanzu.vmware.com/kubernetes-grid

VMware Tanzu Kubernetes Grid 1.0 Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html


Categories: Fusion Middleware

Running Oracle 18c on a vSphere 7 using a Tanzu Kubernetes Grid Cluster

Sun, 2020-05-03 20:53
Previously I blogged about how to run a stateful MySQL pod on vSphere 7 with Kubernetes. In this blog post we will do the same with a single instance Oracle Database.

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes
http://theblasfrompas.blogspot.com/2020/04/creating-single-instance-stateful-mysql.html

For this blog we will use a single instance Oracle database, namely Oracle Database 18c (18.4.0) Express Edition (XE), but we could use any of the following if we wanted to; for a demo Oracle XE is all I need.
  • Oracle Database 19c (19.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 18c (18.4.0) Express Edition (XE)
  • Oracle Database 18c (18.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 2 (12.2.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 1 (12.1.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 11g Release 2 (11.2.0.2) Express Edition (XE)
Steps

1. First head to the following GitHub URL which contains sample Docker build files to facilitate installation, configuration, and environment setup for DevOps users. Clone it as shown below

$ git clone https://github.com/oracle/docker-images.git

2. Change to the directory as follows.

$ cd docker-images/OracleDatabase/SingleInstance/dockerfiles

3. Now ensure you have a local Docker daemon running; in my case I am using Docker Desktop for macOS. With that running let's build our Docker image locally as shown below for the database [Oracle Database 18c (18.4.0) Express Edition (XE)]

$ ./buildDockerImage.sh -v 18.4.0 -x

....

.

  Oracle Database Docker Image for 'xe' version 18.4.0 is ready to be extended:

    --> oracle/database:18.4.0-xe

  Build completed in 1421 seconds.

4. View the image locally using "docker images"
  
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
oracle/database 18.4.0-xe 3ec5d050b739 5 minutes ago 5.86GB
oraclelinux 7-slim f23503228fa1 2 weeks ago 120MB

5. I'm not really interested in running Oracle locally, so let's push the built image to a container registry. In this case I am using Docker Hub.

$ docker tag oracle/database:18.4.0-xe pasapples/oracle18.4.0-xe
$ docker push pasapples/oracle18.4.0-xe
The push refers to repository [docker.io/pasapples/oracle18.4.0-xe]
5bf989482a54: Pushed
899f9c386f90: Pushed
bc198e3a2f79: Mounted from library/oraclelinux
latest: digest: sha256:0dbbb906b20e8b052a5d11744a25e75edff07231980b7e110f45387e4956600a size: 951

Once done, here is the image on Docker Hub.



6. At this point we are ready to deploy our Oracle Database 18c (18.4.0) Express Edition (XE). To do that we will use a Tanzu Kubernetes Grid cluster on vSphere 7. For an example of how that was created visit the blog post below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

We will be using a cluster called "tkg-cluster-1" as shown in vSphere client image below.


7. Ensure we have switched to the correct context here as shown below.

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

8. Now let's create a PVC for our Oracle database. Ensure you use a storage class name you have previously set up; in my case that's "pacific-gold-storage-policy". You don't really need 80G for a demo with Oracle XE, but given I had 2TB of storage I set it quite high.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-pv-claim
  annotations:
    pv.beta.kubernetes.io/gid: "54321"
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi

$ kubectl create -f oracle-pvc.yaml
persistentvolumeclaim/oracle-pv-claim created

$ kubectl describe pvc oracle-pv-claim
Name:          oracle-pv-claim
Namespace:     default
StorageClass:  pacific-gold-storage-policy
Status:        Bound
Volume:        pvc-385ee541-5f7b-4a10-95de-f8b35a24306f
Labels:       
Annotations:   pv.beta.kubernetes.io/gid: 54321
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      80Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:   
Events:
  Type    Reason                Age   From                                                                                                 Message
  ----    ------                ----  ----                                                                                                 -------
  Normal  ExternalProvisioning  49s   persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal  Provisioning          49s   csi.vsphere.vmware.com_vsphere-csi-controller-8446748d4d-qbjhn_acc32eab-845a-11ea-a597-baf3d8b74e48  External provisioner is provisioning volume for claim "default/oracle-pv-claim"

9. Now we are ready to create a Deployment YAML as shown below. A few things to note about the YAML below:
  1. I am hard coding the password, but normally I would use a k8s Secret to do this (a sketch of that is shown after the deployment YAML)
  2. I needed to create an init-container which fixed a file system permission issue for me
  3. I am running as the root user as per "runAsUser: 0"; for some reason the installation would not start without root privileges
  4. I am using the PVC we created above, "oracle-pv-claim"
  5. I want to expose port 1521 (the database listener port) and 5500 (the Enterprise Manager port) internally only for now, as per the Service definition.
Deployment YAML:


apiVersion: v1
kind: Service
metadata:
  name: oracle
spec:
  ports:
  - port: 1521
    name: dblistport
  - port: 5500
    name: emport
  selector:
    app: oracle
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: oracle
spec:
  selector:
    matchLabels:
      app: oracle
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle
    spec:
      containers:
      - image: pasapples/oracle18.4.0-xe
        name: oracle
        env:
          # Use secret in real usage
        - name: ORACLE_PWD
          value: welcome1
        - name: ORACLE_CHARACTERSET
          value: AL32UTF8
        ports:
        - containerPort: 1521
          name: dblistport
        - containerPort: 5500
          name: emport
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
        securityContext:
          runAsUser: 0
          runAsGroup: 54321
      initContainers:
      - name: fix-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 54321:54321 /opt/oracle/oradata && chmod 777 /opt/oracle/oradata
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
      volumes:
      - name: oracle-persistent-storage
        persistentVolumeClaim:
          claimName: oracle-pv-claim
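
As mentioned in point 1 above, in real usage the password would come from a Kubernetes Secret rather than being hard coded. A minimal sketch of what that could look like (the Secret name and key are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: oracle-db-secret
type: Opaque
stringData:
  oracle-pwd: welcome1

The ORACLE_PWD entry in the Deployment would then reference the Secret instead of a literal value:

        - name: ORACLE_PWD
          valueFrom:
            secretKeyRef:
              name: oracle-db-secret   # illustrative Secret created above
              key: oracle-pwd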

10. Apply the YAML as shown below

$ kubectl create -f oracle-deployment.yaml
service/oracle created
deployment.apps/oracle created

11. Wait for the oracle pod to be in a running state as shown below; this should happen fairly quickly
  
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-574b87c764-2zrp2 1/1 Running 0 11d
nginx-574b87c764-p8d45 1/1 Running 0 11d
oracle-77f6f7d567-sfd67 1/1 Running 0 36s

12. You can now monitor the pod as it starts to create the database instance for us using the "kubectl logs" command as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
....

13. This will take some time but eventually it will have created / started the database instance for us as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: oracle-77f6f7d567-sfd67/XEPDB1
     Multitenant container database: oracle-77f6f7d567-sfd67
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
Pluggable database XEPDB1 opened read write
Completed: alter pluggable database XEPDB1 open
2020-05-04T00:59:32.719571+00:00
XEPDB1(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
XEPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
2020-05-04T00:59:37.043341+00:00
ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE

14. The easiest way to test out our database instance is to "exec" into the pod and use SQLPlus as shown below

- Create a script as follows

export POD_NAME=`kubectl get pod -l app=oracle -o jsonpath="{.items[0].metadata.name}"`
kubectl exec -it $POD_NAME -- /bin/bash

- Execute the script to exec into the pod

$ ./exec-oracle-pod.sh
bash-4.2#

15. Now let's connect in one of two ways, given we also have a pluggable database instance running
  
bash-4.2# sqlplus system/welcome1@XE

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:02:38 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.


Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> exit
Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
bash-4.2# sqlplus system/welcome1@XEPDB1

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:03:20 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 01:02:38 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>

16. Now let's connect externally to the database. To do that I can create a port forward of the Oracle database listener port as shown below. I have set up the Oracle Instant Client using the following URL: https://www.oracle.com/database/technologies/instant-client/macos-intel-x86-downloads.html

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 1521
Forwarding from 127.0.0.1:1521 -> 1521
Forwarding from [::1]:1521 -> 1521

Now log in using SQL*Plus directly from my macOS terminal window

  
$ sqlplus system/welcome1@//localhost:1521/XEPDB1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 4 11:43:05 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 11:39:46 +10:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>


17. We could also use Oracle Enterprise Manager, which we would do as follows. We could create a k8s Service of type LoadBalancer as well, but for now let's just do a simple port forward as per above

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 5500
Forwarding from 127.0.0.1:5500 -> 5500
Forwarding from [::1]:5500 -> 5500

18. Access Oracle Enterprise Manager as follows, ensuring you have Flash installed in your browser. I logged in using the "SYS" user as "SYSDBA"

https://localhost:5500/em

Once logged in:






And that's it. You have Oracle 18c and Oracle Enterprise Manager running on vSphere 7 with Kubernetes and can now start deploying applications that use that Oracle instance as required.


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html


Categories: Fusion Middleware

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes

Mon, 2020-04-27 20:32
In the vSphere environment, the persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. After the vSphere administrator creates a storage policy, for example gold, and assigns it to a namespace in a Supervisor Cluster, the storage policy appears as a matching Kubernetes storage class in the Supervisor Namespace and any available Tanzu Kubernetes clusters.

In the example below we will show how to run a single instance stateful MySQL application pod on vSphere 7 with Kubernetes. For an introduction to vSphere 7 with Kubernetes see the blog link below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

Steps 

1. If you followed the Blog above you will have a Namespace as shown in the image below. The namespace we are using is called "ns1"



2. Click on "ns1" and ensure you have added storage using the "Storage" card



3. Now let's connect to our supervisor cluster and switch to the Namespace "ns1"

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS 
--vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

4. At this point we need to switch to the Namespace we created at step 2 which is "ns1".

$ kubectl config use-context ns1
Switched to context "ns1".

5. Use one of the following commands to verify that the storage class is the one which we added to the Namespace as per #2 above, in this case "pacific-gold-storage-policy".
  
$ kubectl get storageclass
NAME PROVISIONER AGE
pacific-gold-storage-policy csi.vsphere.vmware.com 5d20h

$ kubectl describe namespace ns1
Name: ns1
Labels: vSphereClusterID=domain-c8
Annotations: ncp/extpoolid: domain-c8:1d3e6bfb-af68-4494-a9bf-c8560a7a6aef-ippool-10-193-191-129-10-193-191-190
ncp/snat_ip: 10.193.191.141
ncp/subnet-0: 10.244.0.240/28
ncp/subnet-1: 10.244.1.16/28
vmware-system-resource-pool: resgroup-67
vmware-system-vm-folder: group-v68
Status: Active

Resource Quotas
Name: ns1-storagequota
Resource Used Hard
-------- --- ---
pacific-gold-storage-policy.storageclass.storage.k8s.io/requests.storage 20Gi 9223372036854775807

No resource limits.

As a DevOps engineer, you can use the storage class in your persistent volume claim specifications. You can then deploy an application that uses storage from the persistent volume claim.

6. At this point we can create a Persistent Volume Claim using YAML as follows. In the example below we reference the storage class name "pacific-gold-storage-policy".

Note: We are using a Supervisor Cluster Namespace here for our Stateful MySQL application but the storage class name will also appear in any Tanzu Kubernetes clusters you have created.

Example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

7. Let's view the PVC we just created
  
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-a60f2787-ccf4-4142-8bf5-14082ae33403 20Gi RWO pacific-gold-storage-policy 39s

8. Now let's create a Deployment that will mount this PVC we created above using the name "mysql-pv-claim"

Example:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

$ kubectl apply -f mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created

9. Let's verify we have a running Deployment with a MySQL POD as shown below
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mysql-c85f7f79c-gskkr 1/1 Running 0 78s
pod/nginx 1/1 Running 0 3d21h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql ClusterIP None <none> 3306/TCP 79s
service/tkg-cluster-1-60657ac113b7b5a0ebaab LoadBalancer 10.96.0.253 10.193.191.68 80:32078/TCP 5d19h
service/tkg-cluster-1-control-plane-service LoadBalancer 10.96.0.222 10.193.191.66 6443:30659/TCP 5d19h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mysql 1/1 1 1 79s

NAME DESIRED CURRENT READY AGE
replicaset.apps/mysql-c85f7f79c 1 1 1 79s

10. If we return to vSphere client we will see our MySQL Stateful deployment as shown below


11. We can also view the PVC we have created in vSphere client as well



12. Finally, let's connect to the MySQL database, which can be done as follows

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.47 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| performance_schema  |
+---------------------+
4 rows in set (0.02 sec)

mysql>
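
Because /var/lib/mysql is backed by the persistent volume claim, the data survives the pod being deleted and rescheduled. A quick, optional sanity check (the database name "demo" is just an example):

mysql> create database demo;
mysql> exit

$ kubectl delete pod -l app=mysql
$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword -e "show databases;"

Once the replacement MySQL pod is Running, the "demo" database should still be listed.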


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html

Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52.html#GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52
Categories: Fusion Middleware

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"

Wed, 2020-04-22 19:40
VMware recently announced the general availability of vSphere 7. Among many new features is the integration of Kubernetes into vSphere. In this blog post we will see what is required to create our first Kubernetes Guest cluster and deploy the simplest of workloads.



Steps

1. Log into the vCenter client and select "Menu -> Workload Management" and click on "Enable"

Full details on how to enable and setup the Supervisor Cluster can be found at the following docs

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html

Make sure you enable Harbor as the Registry using this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html

A pre-requisite for Workload Management is to have NSX-T 3.0 installed / enabled. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html

Once all done the "Workload Management" page will look like this. This can take around 30 minutes to complete



2. As a vSphere administrator, you can create namespaces on a Supervisor Cluster and configure them with resource quotas and storage, as well as set permissions for DevOps engineer users. Once you configure a namespace, you can provide it to DevOps engineers, who run vSphere Pods and Kubernetes clusters created through the VMware Tanzu™ Kubernetes Grid™ Service.

To do this follow this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-1544C9FE-0B23-434E-B823-C59EFC2F7309.html

Note: Make a note of this Namespace as we are going to need to connect to it shortly. In the examples below we have a namespace called "ns1"

3. With a vSphere namespace created we can now download the required CLI

Note: You can get the files from the Namespace summary page as shown below under the heading "Link to CLI Tools"



Once downloaded, put the contents of the .zip file in your OS's executable search path.
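
For example, on a Linux or macOS workstation this might look something like the following (the zip name and its bin/ layout are based on my download, so treat them as assumptions for your environment):

$ unzip vsphere-plugin.zip
$ sudo mv bin/kubectl bin/kubectl-vsphere /usr/local/bin/
$ kubectl vsphere --help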

4. Now we are ready to login. To do that we will use a command as follows

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS 
--vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-253.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

Full instructions are at the following URL

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-F5114388-1838-4B3B-8A8D-4AE17F33526A.html

5. At this point we need to switch to the Namespace we created at step 2 which is "ns1"

$ kubectl config use-context ns1
Switched to context "ns1".

6. Get a list of the available content images and the Kubernetes version that the image provides

Command: kubectl get virtualmachineimages
  
$ kubectl get virtualmachineimages
NAME AGE
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd 35m

Version Information can be retrieved as follows:
  
$ kubectl describe virtualmachineimage ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Name: ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Namespace:
Labels: <none>
Annotations: vmware-system.compatibilityoffering:
[{"requires": {"k8s.io/configmap": [{"predicate": {"operation": "anyOf", "arguments": [{"operation": "not", "arguments": [{"operation": "i...
vmware-system.guest.kubernetes.addons.calico:
{"type": "inline", "value": "---\n# Source: calico/templates/calico-config.yaml\n# This ConfigMap is used to configure a self-hosted Calic...
vmware-system.guest.kubernetes.addons.pvcsi:
{"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ .PVCSINamespace }}\n---\nkind: ServiceAccount\napiVers...
vmware-system.guest.kubernetes.addons.vmware-guest-cluster:
{"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: vmware-system-cloud-provider\n---\napiVersion: v1\nkind: ...
vmware-system.guest.kubernetes.distribution.image.version:
{"kubernetes": {"version": "1.16.8+vmware.1", "imageRepository": "vmware.io"}, "compatibility-7.0.0.10100": {"isCompatible": "true"}, "dis...
API Version: vmoperator.vmware.com/v1alpha1
Kind: VirtualMachineImage
Metadata:
Creation Timestamp: 2020-04-22T04:52:42Z
Generation: 1
Resource Version: 28324
Self Link: /apis/vmoperator.vmware.com/v1alpha1/virtualmachineimages/ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
UID: 9b2a8248-d315-4b50-806f-f135459801a8
Spec:
Image Source Type: Content Library
Type: ovf
Events: <none>


7. Create a YAML file with the required configuration parameters to define the cluster

A few things to note:
  1. Make sure your storageClass name matches the storage class name you used during setup
  2. Make sure your distribution version matches a name from the output of step 6
Example:

apiVersion: run.tanzu.vmware.com/v1alpha1               #TKG API endpoint
kind: TanzuKubernetesCluster                            #required parameter
metadata:
  name: tkg-cluster-1                                   #cluster name, user defined
  namespace: ns1                                        #supervisor namespace
spec:
  distribution:
    version: v1.16                                      #resolved kubernetes version
  topology:
    controlPlane:
      count: 1                                          #number of control plane nodes
      class: best-effort-small                          #vmclass for control plane nodes
      storageClass: pacific-gold-storage-policy         #storageclass for control plane
    workers:
      count: 3                                          #number of worker nodes
      class: best-effort-small                          #vmclass for worker nodes
      storageClass: pacific-gold-storage-policy         #storageclass for worker nodes

More information on what goes into your YAML is available here

Configuration Parameters for Provisioning Tanzu Kubernetes Clusters
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html

8. Provision the Tanzu Kubernetes cluster using the following kubectl command against the manifest file above

Command: kubectl apply -f CLUSTER-NAME.yaml

While creating you can check the status as follows

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 15m running

NAME PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1 provisioned

NAME PROVIDERID PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7 vsphere://420c7807-d2f2-0461-8232-ec33e07632fa running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c provisioning

NAME AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7 14m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp 6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm 6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c 6m4s

9. Run the following command and make sure the Tanzu Kubernetes cluster is running, this may take some time.

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 18m running

NAME PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1 provisioned

NAME PROVIDERID PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7 vsphere://420c7807-d2f2-0461-8232-ec33e07632fa running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp vsphere://420ca6ec-9793-7f23-2cd9-67b46c4cc49d provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm vsphere://420c9dd0-4fee-deb1-5673-dabc52b822ca provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c vsphere://420cf11f-24e4-83dd-be10-7c87e5486f1c provisioned

NAME AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7 18m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp 9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm 9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c 9m59s

10. For a more concise view of the Tanzu Kubernetes Clusters you have, along with their status, the following command is useful

Command: kubectl get tanzukubernetescluster
  
$ kubectl get tanzukubernetescluster
NAME CONTROL PLANE WORKER DISTRIBUTION AGE PHASE
tkg-cluster-1 1 3 v1.16.8+vmware.1-tkg.3.60d2ffd 20m running

11. Now let's log in to a Tanzu Kubernetes Cluster using its name as follows

kubectl vsphere login --tanzu-kubernetes-cluster-name TKG-CLUSTER-NAME --vsphere-username VCENTER-SSO-USER --server SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --insecure-skip-tls-verify

Example:

$ kubectl vsphere login --tanzu-kubernetes-cluster-name tkg-cluster-1 --vsphere-username administrator@vsphere.local --server wcp.haas-yyy.pez.pivotal.io --insecure-skip-tls-verify

Password:

Logged in successfully.

You have access to the following contexts:
   ns1
   tkg-cluster-1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context `

12. Let's switch to the correct context here which is our newly created Kubernetes cluster

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

13. If your applications fail to run with the error “container has runAsNonRoot and the image will run as root”, add the RBAC cluster roles from here:

https://github.com/dstamen/Kubernetes/blob/master/demo-applications/allow-runasnonroot-clusterrole.yaml

Pod Security Policy (PSP) is enabled by default in Tanzu Kubernetes Clusters, so a PSP binding needs to be applied before deploying workloads to the cluster, as shown in the link above.
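
One common lab-friendly approach is to bind the privileged PSP that ships with Tanzu Kubernetes clusters to all authenticated users. The cluster role name below is the VMware-provided one, so verify it exists in your cluster before relying on it:

$ kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
    --clusterrole=psp:vmware-system-privileged \
    --group=system:authenticated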

14. Now let's deploy a simple nginx Deployment using the following YAML file

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

15. Apply the YAML config to create the Deployment

$ kubectl create -f nginx-deployment.yaml
service/nginx created
deployment.apps/nginx created

16. Verify everything was deployed successfully as shown below
  
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-574b87c764-2zrp2 1/1 Running 0 74s
pod/nginx-574b87c764-p8d45 1/1 Running 0 74s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
service/nginx LoadBalancer 10.111.0.106 10.193.191.68 80:31921/TCP 75s
service/supervisor ClusterIP None <none> 6443/TCP 29m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 2/2 2 2 75s

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-574b87c764 2 2 2 75s

To access NGINX, use the external IP address of the service "service/nginx" on port 80.
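
For example, using the external IP from the output above (yours will differ):

$ curl -I http://10.193.191.68

This should return an HTTP 200 along with the default NGINX welcome page headers.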



17. Finally, let's return to the vSphere client and see where the Tanzu Kubernetes Cluster we created exists. It will be inside the vSphere namespace "ns1", which is where we drove our install of the Tanzu Kubernetes Cluster from.





More Information

Introducing vSphere 7: Modern Applications & Kubernetes
https://blogs.vmware.com/vsphere/2020/03/vsphere-7-kubernetes-tanzu.html

How to Get vSphere with Kubernetes
https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html

vSphere with Kubernetes 101 Whitepaper
https://blogs.vmware.com/vsphere/2020/03/vsphere-with-kubernetes-101.html



Categories: Fusion Middleware

Ever wondered if Cloud Foundry can run on Kubernetes?

Wed, 2020-04-15 23:36
Well, yes, it's possible and available to test now as per the repo below. In this post we will cover the requirements to install cf-for-k8s and show what we can do with it as it stands today once installed.

https://github.com/cloudfoundry/cf-for-k8s

Before we get started it's important to note the following, taken directly from the GitHub repo itself.

"This is a highly experimental project to deploy the new CF Kubernetes-centric components on Kubernetes. It is not meant for use in production and is subject to change in the future"

Steps

1. First we need a k8s cluster. I am using k8s on vSphere using VMware Enterprise PKS but you can use GKE or any other cluster that supports the minimum requirements.

To deploy cf-for-k8s as is, the cluster should:
  • be running version 1.14.x, 1.15.x, or 1.16.x
  • have a minimum of 5 nodes
  • have a minimum of 3 CPU, 7.5GB memory per node
2. There are also some IaaS requirements as shown below.



  • Supports LoadBalancer services
  • Defines a default StorageClass 


3. Finally, pushing source-code based apps to Cloud Foundry requires an OCI compliant registry. I am using GCR but Docker Hub also works.

    Under the hood, cf-for-k8s uses Cloud Native Buildpacks to detect and build the app source code into an OCI compliant image and pushes the app image to the registry. Though cf-for-k8s has been tested with Google Container Registry and Dockerhub.com, it should work with any external OCI compliant registry.

    So if you are like me, using GCR and following along, you will need to create an IAM account with storage privileges for GCR. Assuming you want to create a new IAM account on GCP, follow these steps, ensuring you set your GCP project id as shown below

    $ export GCP_PROJECT_ID={project-id-in-gcp}

    $ gcloud iam service-accounts create push-image

    $ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin

    $ gcloud iam service-accounts keys create \
      --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      gcr-storage-admin.json

    4. So to install cf-for-k8s we simply follow the detailed steps below.

    https://github.com/cloudfoundry/cf-for-k8s/blob/master/docs/deploy.md

    Note: We are using GCR, so the generate-values script we run looks as follows, which injects our GCR IAM account keys into the YML file, given we performed the IAM step above.

    $ ./hack/generate-values.sh -d DOMAIN -g ./gcr-push-storage-admin.json > /tmp/cf-values.yml
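
    The deploy itself is then driven by ytt and kapp as per the deploy.md doc above; at the time of writing it looked roughly like the following (run from the cf-for-k8s repo root), but check the doc for the current form:

    $ kapp deploy -a cf -f <(ytt -f config -f /tmp/cf-values.yml)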

    5. So in about 8 minutes or so you should have Cloud Foundry running on your Kubernetes cluster. Let's run a series of commands to verify that.

    - Here we see a set of Cloud Foundry namespaces named "cf-{name}"
      
    $ kubectl get ns
    NAME STATUS AGE
    cf-blobstore Active 8d
    cf-db Active 8d
    cf-system Active 8d
    cf-workloads Active 8d
    cf-workloads-staging Active 8d
    console Active 122m
    default Active 47d
    istio-system Active 8d
    kpack Active 8d
    kube-node-lease Active 47d
    kube-public Active 47d
    kube-system Active 47d
    metacontroller Active 8d
    pks-system Active 47d
    vmware-system-tmc Active 12d

    - Let's check the Cloud Foundry system is up and running by inspecting the status of the PODS as shown below
      
    $ kubectl get pods -n cf-system
    NAME READY STATUS RESTARTS AGE
    capi-api-server-6d89f44d5b-krsck 5/5 Running 2 8d
    capi-api-server-6d89f44d5b-pwv4b 5/5 Running 2 8d
    capi-clock-6c9f6bfd7-nmjrd 2/2 Running 0 8d
    capi-deployment-updater-79b4dc76-g2x6s 2/2 Running 0 8d
    capi-kpack-watcher-6c67984798-2x5n2 2/2 Running 0 8d
    capi-worker-7f8d499494-cd8fx 2/2 Running 0 8d
    cfroutesync-6fb9749-cbv6w 2/2 Running 0 8d
    eirini-6959464957-25ttx 2/2 Running 0 8d
    fluentd-4l9ml 2/2 Running 3 8d
    fluentd-mf8x6 2/2 Running 3 8d
    fluentd-smss9 2/2 Running 3 8d
    fluentd-vfzhl 2/2 Running 3 8d
    fluentd-vpn4c 2/2 Running 3 8d
    log-cache-559846dbc6-p85tk 5/5 Running 5 8d
    metric-proxy-76595fd7c-x9x5s 2/2 Running 0 8d
    uaa-79d77dbb77-gxss8 2/2 Running 2 8d

    - Let's view the ingress gateway resources in the "istio-system" namespace
      
    $ kubectl get all -n istio-system
    NAME READY STATUS RESTARTS AGE
    pod/istio-citadel-bc7957fc4-nn8kx 1/1 Running 0 8d
    pod/istio-galley-6478b6947d-6dl9h 2/2 Running 0 8d
    pod/istio-ingressgateway-fcgvg 2/2 Running 0 8d
    pod/istio-ingressgateway-jzkpj 2/2 Running 0 8d
    pod/istio-ingressgateway-ptjzz 2/2 Running 0 8d
    pod/istio-ingressgateway-rtwk4 2/2 Running 0 8d
    pod/istio-ingressgateway-tvz8p 2/2 Running 0 8d
    pod/istio-pilot-67955bdf6f-nrhzp 2/2 Running 0 8d
    pod/istio-policy-6b786c6f65-m7tj5 2/2 Running 3 8d
    pod/istio-sidecar-injector-5669cc5894-tq55v 1/1 Running 0 8d
    pod/istio-telemetry-77b745cd6b-wn2dx 2/2 Running 3 8d

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/istio-citadel ClusterIP 10.100.200.216 <none> 8060/TCP,15014/TCP 8d
    service/istio-galley ClusterIP 10.100.200.214 <none> 443/TCP,15014/TCP,9901/TCP,15019/TCP 8d
    service/istio-ingressgateway LoadBalancer 10.100.200.105 10.195.93.142 15020:31515/TCP,80:31666/TCP,443:30812/TCP,15029:31219/TCP,15030:31566/TCP,15031:30615/TCP,15032:30206/TCP,15443:32555/TCP 8d
    service/istio-pilot ClusterIP 10.100.200.182 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 8d
    service/istio-policy ClusterIP 10.100.200.98 <none> 9091/TCP,15004/TCP,15014/TCP 8d
    service/istio-sidecar-injector ClusterIP 10.100.200.160 <none> 443/TCP 8d
    service/istio-telemetry ClusterIP 10.100.200.5 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 8d

    NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
    daemonset.apps/istio-ingressgateway 5 5 5 5 5 <none> 8d

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/istio-citadel 1/1 1 1 8d
    deployment.apps/istio-galley 1/1 1 1 8d
    deployment.apps/istio-pilot 1/1 1 1 8d
    deployment.apps/istio-policy 1/1 1 1 8d
    deployment.apps/istio-sidecar-injector 1/1 1 1 8d
    deployment.apps/istio-telemetry 1/1 1 1 8d

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/istio-citadel-bc7957fc4 1 1 1 8d
    replicaset.apps/istio-galley-6478b6947d 1 1 1 8d
    replicaset.apps/istio-pilot-67955bdf6f 1 1 1 8d
    replicaset.apps/istio-policy-6b786c6f65 1 1 1 8d
    replicaset.apps/istio-sidecar-injector-5669cc5894 1 1 1 8d
    replicaset.apps/istio-telemetry-77b745cd6b 1 1 1 8d

    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot 0%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-policy Deployment/istio-policy 2%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-telemetry Deployment/istio-telemetry 7%/80% 1 5 1 8d

    You can use kapp to verify your install as follows:

    $ kapp list
    Target cluster 'https://cfk8s.mydomain:8443' (nodes: 46431ba8-2048-41ea-a5c9-84c3a3716f6e, 4+)

    Apps in namespace 'default'

    Name  Label                                 Namespaces                                                                                                  Lcs   Lca
    cf    kapp.k14s.io/app=1586305498771951000  (cluster),cf-blobstore,cf-db,cf-system,cf-workloads,cf-workloads-staging,istio-system,kpack,metacontroller  true  8d

    Lcs: Last Change Successful
    Lca: Last Change Age

    1 apps

    Succeeded

    6. Now that Cloud Foundry is running we need to configure DNS with your IaaS provider so that the wildcard subdomain of your system domain and the wildcard subdomain of all apps domains point to the external IP of the Istio Ingress Gateway service. You can retrieve the external IP of this service by running a command as follows

    $ kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

    Note: The DNS A record wildcard entry would look as follows ensuring you use the DOMAIN you told the install script you were using

    DNS entry should be mapped to : *.{DOMAIN}

    7. Once done we can use dig to verify we have set up our DNS wildcard entry correctly. We are looking for an ANSWER section which maps to the IP address we retrieved from the Istio ingress gateway service in the previous step.

    $ dig api.mydomain

    ; <<>> DiG 9.10.6 <<>> api.mydomain
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58127
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;api.mydomain. IN A

    ;; ANSWER SECTION:
    api.mydomain. 60 IN A 10.0.0.1

    ;; Query time: 216 msec
    ;; SERVER: 10.10.6.6#53(10.10.6.7)
    ;; WHEN: Thu Apr 16 11:46:59 AEST 2020
    ;; MSG SIZE  rcvd: 83

    8. So now we are ready to log in using the Cloud Foundry CLI. Make sure you're using the latest version as shown below

    $ cf version
    cf version 6.50.0+4f0c3a2ce.2020-03-03

    Note: You can install Cloud Foundry CLI as follows

    https://github.com/cloudfoundry/cli

    9. Ok so we are ready to target the API endpoint and log in. As you may have guessed, the API endpoint is "api.{DOMAIN}", so go ahead and do that as shown below. If this fails it means you have to re-visit steps 6 and 7 above.

    $ cf api https://api.mydomain --skip-ssl-validation
    Setting api endpoint to https://api.mydomain...
    OK

    api endpoint:   https://api.mydomain
    api version:    2.148.0

    10. So now we need the admin password to log in using UAA; this was generated for us when we ran the generate script above to produce our install YML. You can run a simple command as follows against the YML file to get the password.

    $ head cf-values.yml
    #@data/values
    ---
    system_domain: "mydomain"
    app_domains:
    #@overlay/append
    - "mydomain"
    cf_admin_password: 5nxm5bnl23jf5f0aivbs

    cf_blobstore:
      secret_key: 04gihynpr0x4dpptc5a5
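
    Alternatively, you can pull out just the value with a quick grep against the same values file:

    $ grep cf_admin_password cf-values.yml
    cf_admin_password: 5nxm5bnl23jf5f0aivbs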

    11. So to log in I use a script as follows, which creates a space for me which I then target to push applications into.

    cf auth admin 5nxm5bnl23jf5f0aivbs
    cf target -o system
    cf create-space development
    cf target -s development

    Output when we run this script or just type each command one at a time will look as follows.

    API endpoint: https://api.mydomain
    Authenticating...
    OK

    Use 'cf target' to view or set your target org and space.
    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development
    Creating space development in org system as admin...
    OK

    Space development already exists

    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development

    12. If we type in "cf apps" we will see we have no applications deployed which is expected.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    No apps found

    13. So let's deploy our first application. In this example we will use a NodeJS Cloud Foundry application which exists at the following GitHub repo. We will deploy it using its source code only. To do that we will clone it onto our file system as shown below.

    https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    $ git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    14. Edit cf-sample-app-nodejs/manifest.yml to look as follows by removing the random-route entry

    ---
    applications:
    - name: cf-nodejs
      memory: 512M
      instances: 1

    15. Now to push the Node app we are going to use two terminal windows. One to actually push the app and the other to view the logs.


    16. Now in the first terminal window issue this command, ensuring the cloned app from above exists relative to the directory you're in, as shown by the path it's referencing

    $ cf push test-node-app -p ./cf-sample-app-nodejs

    17. In the second terminal window issue this command.

    $ cf logs test-node-app

    18. You should see log output while the application is being pushed.



    19. Wait for the "cf push" to complete as shown below

    ....

    Waiting for app to start...

    name:                test-node-app
    requested state:     started
    isolation segment:   placeholder
    routes:              test-node-app.system.run.haas-210.pez.pivotal.io
    last uploaded:       Thu 16 Apr 13:04:59 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T03:05:13Z   0.0%   0 of 1G   0 of 1G


    Verify we have deployed our Node app and it has a fully qualified URL for us to access it as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           1/1         1G       1G     test-node-app.mydomain

    ** Browser **



    Ok so what actually happened on our k8s cluster to get this application deployed? There is a series of steps performed, which is why "cf push" blocks until all of them have completed. At a high level these are the 3 main steps (see the command sketch after this list):
    1. CAPI uploads the code and puts it in an internal blob store
    2. kpack builds the image and stores it in the registry you defined at install time (GCR for us)
    3. Eirini schedules the pod
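
    If you want to watch the kpack side of this while a push is staging, the build resources live in the staging namespace we saw earlier. A quick look (assuming the kpack Build CRD installed by cf-for-k8s):

    $ kubectl get builds -n cf-workloads-staging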

    GCR "cf-workloads" folder


    kpack is where a lot of the magic actually occurs. kpack is based on the CNCF sandbox project known as Cloud Native Buildpacks and can create OCI compliant images from source code and/or artifacts automatically for you. CNB/kpack doesn't stop there; to find out more I suggest going to the following links.

    https://tanzu.vmware.com/content/blog/introducing-kpack-a-kubernetes-native-container-build-service

    https://buildpacks.io/

    Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles.

    Specifically, buildpacks:
    • Provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale.
    • Ensure that apps meet security and compliance requirements without developer intervention.
    • Provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
    • Rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
    20. Let's run a series of kubectl commands to see what was created. All of our apps get deployed to the namespace "cf-workloads".

    - What POD's are running in cf-workloads
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 26m

    - You will notice we have a POD running with 2 containers BUT we also have a Service which is used internally to route to the one or more PODs using ClusterIP as shown below
      
    $ kubectl get svc -n cf-workloads
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 27m

    - Each POD has two containers, named as follows:

    opi: This is your actual container instance running your code
    istio-proxy: This, as the name suggests, is a proxy container which among other things routes requests to the opi container when required
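
    You can confirm those container names directly from the pod spec (using the pod name from the output above):

    $ kubectl get pod test-node-app-development-c346b24349-0 -n cf-workloads -o jsonpath='{.spec.containers[*].name}'

    This should print the opi and istio-proxy container names described above.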

    21. Ok so let's scale our application to run 2 instances. To do that we simply use Cloud Foundry CLI as follows

    $ cf scale test-node-app -i 2
    Scaling app test-node-app in org system / space development as admin...
    OK

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    And using kubectl as expected we end up with another POD created for the second instance
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 44m
    test-node-app-development-c346b24349-1 2/2 Running 0 112s

    If we dig a bit deeper we will see that a StatefulSet backs the application deployment as shown below
      
    $ kubectl get all -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    pod/test-node-app-development-c346b24349-0 2/2 Running 0 53m
    pod/test-node-app-development-c346b24349-1 2/2 Running 0 10m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 53m

    NAME READY AGE
    statefulset.apps/test-node-app-development-c346b24349 2/2 53m

    Ok so as you may have guessed we can deploy many different types of apps because kpack supports multiple languages including Java, Go, Python etc.

    22. Let's deploy a Go application as follows.

    $ git clone https://github.com/swisscom/cf-sample-app-go

    $ cf push my-go-app -m 64M -p ./cf-sample-app-go
    Pushing app my-go-app to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:       my-go-app
      path:       /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/cf-sample-app-go
    + memory:     64M
      routes:
    +   my-go-app.mydomain

    Creating app my-go-app...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.43 KiB / 1.43 KiB [====================================================================================] 100.00% 1s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                my-go-app
    requested state:     started
    isolation segment:   placeholder
    routes:              my-go-app.mydomain
    last uploaded:       Thu 16 Apr 14:06:25 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   64M
         state     since                  cpu    memory     disk      details
    #0   running   2020-04-16T04:06:43Z   0.0%   0 of 64M   0 of 1G

    We can invoke the application using "curl" or something more modern like "HTTPie"

    $ http http://my-go-app.mydomain
    HTTP/1.1 200 OK
    content-length: 59
    content-type: text/plain; charset=utf-8
    date: Thu, 16 Apr 2020 04:09:46 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 6

    Congratulations! Welcome to the Swisscom Application Cloud!

    If we tailed the logs using "cf logs my-go-app" we would have seen that kpack intelligently determines this is a Go app and uses the Go buildpack to compile the code and produce a container image.

    ...
    2020-04-16T14:05:27.52+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Warning: Image "gcr.io/fe-papicella/cf-workloads/f0072cfa-0e7e-41da-9bf7-d34b2997fb94" not found
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Compiler Buildpack 0.0.83
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go 1.13.7: Contributing to layer
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Downloading from https://buildpacks.cloudfoundry.org/dependencies/go/go-1.13.7-bionic-5bb47c26.tgz
    2020-04-16T14:05:35.13+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Verifying checksum
    2020-04-16T14:05:35.63+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Expanding to /layers/org.cloudfoundry.go-compiler/go
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Mod Buildpack 0.0.84
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Setting environment variables
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    2020-04-16T14:05:41.68+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT github.com/swisscom/cf-sample-app-go
    2020-04-16T14:05:41.69+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    ...

    Using "cf apps" we now have two applications deployed as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    my-go-app       started           1/1         64M      1G     my-go-app.mydomain
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    23. Finally, kpack and the buildpacks ecosystem can also deploy already-built artifacts. The Java buildpack is capable of not only deploying from source but can also use a fat Spring Boot JAR file, for example, as shown below. In this example we have packaged the artifact we wish to deploy as "PivotalMySQLWeb-1.0.0-SNAPSHOT.jar".

    $ cf push piv-mysql-web -p PivotalMySQLWeb-1.0.0-SNAPSHOT.jar -i 1 -m 1g
    Pushing app piv-mysql-web to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:        piv-mysql-web
      path:        /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/PivotalMySQLWeb-1.0.0-SNAPSHOT.jar
    + instances:   1
    + memory:      1G
      routes:
    +   piv-mysql-web.mydomain

    Creating app piv-mysql-web...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.03 MiB / 1.03 MiB [====================================================================================] 100.00% 2s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T04:17:43Z   0.0%   0 of 1G   0 of 1G


    Of course the usual commands you expect from the CF CLI still exist. Here are some examples.

    $ cf app piv-mysql-web
    Showing health and status for app piv-mysql-web in org system / space development as admin...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory         disk      details
    #0   running   2020-04-16T04:17:43Z   0.1%   195.8M of 1G   0 of 1G

    $ cf env piv-mysql-web
    Getting env variables for app piv-mysql-web in org system / space development as admin...
    OK

    System-Provided:

    {
     "VCAP_APPLICATION": {
      "application_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "application_name": "piv-mysql-web",
      "application_uris": [
       "piv-mysql-web.mydomain"
      ],
      "application_version": "750d9530-e756-4b74-ac86-75b61c60fe2d",
      "cf_api": "https://api. mydomain",
      "limits": {
       "disk": 1024,
       "fds": 16384,
       "mem": 1024
      },
      "name": "piv-mysql-web",
      "organization_id": "8ae94610-513c-435b-884f-86daf81229c8",
      "organization_name": "system",
      "process_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "process_type": "web",
      "space_id": "7f3d78ae-34d4-42e4-8ab8-b34e46e8ad1f",
      "space_name": "development",
      "uris": [
       "piv-mysql-web. mydomain"
      ],
      "users": null,
      "version": "750d9530-e756-4b74-ac86-75b61c60fe2d"
     }
    }

    No user-defined env variables have been set

    No running env variables have been set

    No staging env variables have been set

    So what about some sort of UI? That brings us to step 24.

    24. Let's start by installing helm using a script as follows

    #!/usr/bin/env bash

    echo "install helm"
    # installs helm with bash commands for easier command line integration
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
    # add a service account within a namespace to segregate tiller
    kubectl --namespace kube-system create sa tiller
    # create a cluster role binding for tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole cluster-admin \
        --serviceaccount=kube-system:tiller

    echo "initialize helm"
    # initialized helm within the tiller service account
    helm init --service-account tiller
    # updates the repos for Helm repo integration
    helm repo update

    echo "verify helm"
    # verify that helm is installed in the cluster
    kubectl get deploy,svc tiller-deploy -n kube-system

    Once installed you can verify helm is working by using "helm ls" which should come back with no output as you haven't installed anything with helm yet.

    25. Run the following to install Stratos, an open source Web UI for Cloud Foundry

    For more information on Stratos visit this URL - https://github.com/cloudfoundry/stratos
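
    Note: The install command below assumes the Stratos Helm chart repository has already been added to your Helm client; if it hasn't, add it first (the repo URL is taken from the Stratos docs):

    $ helm repo add stratos https://cloudfoundry.github.io/stratos
    $ helm repo update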

    $ helm install stratos/console --namespace=console --name my-console --set console.service.type=LoadBalancer
    NAME:   my-console
    LAST DEPLOYED: Thu Apr 16 09:48:19 2020
    NAMESPACE: console
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME        READY  UP-TO-DATE  AVAILABLE  AGE
    stratos-db  0/1    1           0          2s

    ==> v1/Job
    NAME                   COMPLETIONS  DURATION  AGE
    stratos-config-init-1  0/1          2s        2s

    ==> v1/PersistentVolumeClaim
    NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    console-mariadb                   Bound   pvc-4ff20e21-1852-445f-854f-894bc42227ce  1Gi       RWO           fast          2s
    my-console-encryption-key-volume  Bound   pvc-095bb7ed-7be9-4d93-b63a-a8af569361b6  20Mi      RWO           fast          2s

    ==> v1/Pod(related)
    NAME                         READY  STATUS             RESTARTS  AGE
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s

    ==> v1/Role
    NAME              AGE
    config-init-role  2s

    ==> v1/RoleBinding
    NAME                              AGE
    config-init-secrets-role-binding  2s

    ==> v1/Secret
    NAME                  TYPE    DATA  AGE
    my-console-db-secret  Opaque  5     2s
    my-console-secret     Opaque  5     2s

    ==> v1/Service
    NAME                TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
    my-console-mariadb  ClusterIP     10.100.200.162  <none>         3306/TCP       2s
    my-console-ui-ext   LoadBalancer  10.100.200.171  10.195.93.143  443:31524/TCP  2s

    ==> v1/ServiceAccount
    NAME         SECRETS  AGE
    config-init  1        2s

    ==> v1/StatefulSet
    NAME     READY  AGE
    stratos  0/1    2s

    26. You can verify it installed a few ways as shown below.

    - Use helm with "helm ls"
      
    $ helm ls
    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    my-console 1 Thu Apr 16 09:48:19 2020 DEPLOYED console-3.0.0 3.0.0 console

    - Verify everything is running using "kubectl get all -n console"
      
    $ k get all -n console
    NAME READY STATUS RESTARTS AGE
    pod/stratos-0 0/2 ContainerCreating 0 40s
    pod/stratos-config-init-1-2t47x 0/1 Completed 0 40s
    pod/stratos-db-69ddf7f5f7-gb8xm 0/1 Running 0 40s

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/my-console-mariadb ClusterIP 10.100.200.162 <none> 3306/TCP 40s
    service/my-console-ui-ext LoadBalancer 10.100.200.171 10.195.1.1 443:31524/TCP 40s

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/stratos-db 0/1 1 0 41s

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/stratos-db-69ddf7f5f7 1 1 0 41s

    NAME READY AGE
    statefulset.apps/stratos 0/1 41s

    NAME COMPLETIONS DURATION AGE
    job.batch/stratos-config-init-1 1/1 27s 42s

    27. Now to open up the UI web app we just need the external IP from "service/my-console-ui-ext" as per the output above.

    Navigate to https://{external-ip}:443

    28. Create a local user to log in, using the password you set and the username "admin".

    Note: The password is just to get into the UI. It can be anything you want it to be.



    29. Now we need to click on "Endpoints" and register a Cloud Foundry endpoint using the same login details we used with the Cloud Foundry API earlier at step 11.

    Note: The API endpoint is what you used at step 9 and make sure to skip SSL validation

    Once connected there are our deployed applications.



    Summary 

    In this post we explored what running Cloud Foundry on Kubernetes looks like. For those familiar with Cloud Foundry or Tanzu Application Service (formerly known as Pivotal Application Service), from a development perspective everything is the same, using familiar CF CLI commands. What changes here is that the footprint required to run Cloud Foundry is much less complicated and runs on Kubernetes itself, meaning even more places to run Cloud Foundry than ever before, plus the ability to leverage community-based projects on Kubernetes, further simplifying Cloud Foundry.

    For more information see the links below.

    More Information

    GitHub Repo
    https://github.com/cloudfoundry/cf-for-k8s

    VMware Tanzu Application Service for Kubernetes (Beta)
    https://network.pivotal.io/products/tas-for-kubernetes/
    Categories: Fusion Middleware

    Thank you kubie exactly what I needed

    Sun, 2020-04-05 22:59
    On average I deal with at least 5 different Kubernetes clusters so today when I saw / heard of kubie I had to install it.

    kubie is an alternative to kubectx, kubens and the k on prompt modification script. It offers context switching, namespace switching and prompt modification in a way that makes each shell independent from the others.

    Installing kubie right now involves downloading the release from the link below. Homebrew support is pending.

    https://github.com/sbstp/kubie/releases

    Once added to your path it's as simple as this

    1. Check kubie is in your path

    $ which kubie
    /usr/local/bin/kubie

    2. Run "kubie ctx" as follows and select the "apples" k8s context

    papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie ctx



    [apples|default] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    3. Switch to a new namespace as shown below and watch how the PS1 prompt changes to indicate the k8s context and the new namespace we have set as a result of the command below

    $ kubectl config set-context --current --namespace=vmware-system-tmc
    Context "apples" modified.

    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    4. Finally, kubie exec is a subcommand that allows you to run commands inside of a context, a bit like kubectl exec allows you to run a command inside a pod. Here are some examples below
      
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples vmware-system-tmc kubectl get pods
    NAME READY STATUS RESTARTS AGE
    agent-updater-75f88b44f6-9f9jj 1/1 Running 0 2d23h
    agentupdater-workload-1586145240-kmwln 1/1 Running 0 3s
    cluster-health-extension-76d9b549b5-dlhms 1/1 Running 0 2d23h
    data-protection-59c88488bd-9wxk2 1/1 Running 0 2d23h
    extension-manager-8d69d95fd-sgksw 1/1 Running 0 2d23h
    extension-updater-77fdc4574d-fkcwb 1/1 Running 0 2d23h
    inspection-extension-64857d4d95-nl76f 1/1 Running 0 2d23h
    intent-agent-6794bb7995-jmcxg 1/1 Running 0 2d23h
    policy-sync-extension-7c968c9dcd-x4jvl 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-ppbn6 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-r82h4 1/1 Running 1 2d23h
    sync-agent-d67f95889-qbxtb 1/1 Running 6 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples default kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pbs-demo-image-build-1-mnh6v-build-pod 0/1 Completed 0 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$
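
    kubie also has a dedicated namespace switcher, so instead of the "kubectl config set-context" command used at step 3 you could simply run the following from within a kubie shell:

    $ kubie ns vmware-system-tmc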

    More Information

    Blog Page:
    https://blog.sbstp.ca/introducing-kubie/

    GitHub Page:
    https://github.com/sbstp/kubie
    Categories: Fusion Middleware

    VMware enterprise PKS 1.7 has just been released

    Thu, 2020-04-02 22:51
    VMware enterprise PKS 1.7 was just released. For details please review the release notes using the link below.

    https://docs.pivotal.io/pks/1-7/release-notes.html



    More Information

    https://docs.pivotal.io/pks/1-7/index.html


    Categories: Fusion Middleware

    kpack 0.0.6 and Docker Hub secret annotation change for Docker Hub

    Mon, 2020-03-02 16:53
    I decided to try out the 0.0.6 release of kpack and noticed a small change to how you define your registry credentials when using Docker Hub. If you don't make this change, kpack will fail to use Docker Hub as your registry, with errors as follows when trying to export the image.

    [export] *** Images (sha256:1335a241ab0428043a89626c99ddac8dfb2719b79743652e535898600439e80f):
    [export]       pasapples/pbs-demo-image:latest - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export]       index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548 - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export] ERROR: failed to export: failed to write image to the following tags: [pasapples/pbs-demo-image:latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]],[index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]]

    Previously in kpack 0.0.5 you defined your Dockerhub registry as follows:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: index.docker.io
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...

    Now with kpack 0.0.6 you need to define the "annotations" using a URL with HTTPS and "/v1" appended to the end, as shown below.

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: https://index.docker.io/v1/
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...
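
    For completeness, kpack only uses this secret once it is listed on the service account that your image builds run as; a minimal sketch is shown below (the service account name is just an example):

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kpack-service-account
    secrets:
    - name: dockerhub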

    More Information

    https://github.com/pivotal/kpack
    Categories: Fusion Middleware

    Nice new look and feel to spring.io web site!!!!

    Sun, 2020-02-16 22:06
    Seen the new look and feel for spring.io? Worth a look.

    https://spring.io/



    Categories: Fusion Middleware

    Taking VMware Tanzu Mission Control for a test drive this time creating a k8s cluster on AWS

    Tue, 2020-02-11 04:12
    Previously I blogged about how to use VMware Tanzu Mission Control (TMC) to attach to kubernetes clusters and in that example we used a GCP GKE cluster. That blog entry exists here

    Taking VMware Tanzu Mission Control for a test drive
    http://theblasfrompas.blogspot.com/2020/02/taking-tanzu-mission-control-for-test.html

    In this example we will use the "Create Cluster" button to create a new k8s cluster on AWS that will be managed by TMC for its entire lifecycle.

    Steps

    Note: Before getting started you need to create a "Cloud Provider Account" and that is done using AWS as shown below. You can create one or more connected cloud provider accounts. Adding accounts allows you to start using VMware TMC to create clusters, add data protection, and much more



    1. Click on the "Clusters" on the left hand navigation bar

    2. In the right hand corner click the button "New Cluster" and select your cloud provider account on AWS as shown below


    3. Fill in the details of your new cluster as shown below ensuring you select the correct AWS region where your cluster will be created.



    4. Click Next

    5. In the next screen I am just going to select a Development control plane



    6. Click Next

    7. Edit the default-node-pool and add 2 worker nodes instead of just 1 as shown below



    8. Click "Create"

    9. This will take you to a screen where your cluster will be created. This can take at least 20 minutes so be patient. Progress is shown as per below



    10. If we switch over to AWS console we will start to see some running instances and other cloud components being created as shown in the images below




    11. Eventually the cluster creation completes and you are taken to a summary screen for your cluster. It will take a few minutes for all "Agent and extensions health" to show up green, so refresh the page several times until everything shows up green as per below.

    Note: This can take up to 10 minutes so be patient




    12. So to access this cluster using "kubectl" use the button "Access this Cluster" in the top right hand corner and it will take you to a screen as follows. Click the "Download kubeconfig file" and the "Tanzu Mission Control CLI" as you will need both those files and save them locally



    13. Make the "tmc" CLI executable and move it into your $PATH as shown below

    $ chmod +x tmc
    $ sudo mv tmc /usr/local/bin

    14. Access cluster using "kubectl" as follows
      
    $ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get namespaces
    NAME STATUS AGE
    default Active 19m
    kube-node-lease Active 19m
    kube-public Active 19m
    kube-system Active 19m
    vmware-system-tmc Active 17m

    Note: You will be taken to a web page to authenticate, and once that's done you're good to go as shown below
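
    Tip: rather than passing --kubeconfig on every command, you can export it for the current shell:

    $ export KUBECONFIG=./kubeconfig-pas-aws-cluster.yml
    $ kubectl get namespaces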


    15. You can view the pods created to allow access from the TMC agent as follows
      
    $ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get pods --namespace=vmware-system-tmc
    NAME READY STATUS RESTARTS AGE
    agent-updater-7b47c659d-8h2mh 1/1 Running 0 25m
    agentupdater-workload-1581415620-csz5p 0/1 Completed 0 35s
    data-protection-769994df65-6cgfh 1/1 Running 0 24m
    extension-manager-657b467c-k4fkl 1/1 Running 0 25m
    extension-updater-c76785dc9-vnmdl 1/1 Running 0 25m
    inspection-extension-79dcff47f6-7lm5r 1/1 Running 0 24m
    intent-agent-7bdf6c8bd4-kgm46 1/1 Running 0 24m
    policy-sync-extension-8648685fc7-shn5g 1/1 Running 0 24m
    policy-webhook-78f5699b76-bvz5f 1/1 Running 1 24m
    policy-webhook-78f5699b76-td74b 1/1 Running 0 24m
    sync-agent-84f5f8bcdc-mrc9p 1/1 Running 0 24m

    So if you got this far you now have attached a cluster and created a cluster from scratch all from VMware TMC and that's just the start.

    Soon I will show how to add some policies to our clusters now that we have them under management.

    More Information

    Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    VMware Tanzu Mission Control
    https://cloud.vmware.com/tanzu-mission-control
    Categories: Fusion Middleware

    Taking VMware Tanzu Mission Control for a test drive

    Mon, 2020-02-10 19:53
    You may or may not have heard of Tanzu Mission Control (TMC), part of the new VMware Tanzu offering which will help you build, run and manage modern apps. To find out more about Tanzu Mission Control, here is the blog link.

    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    In this blog I show you how easily you can use TMC to monitor your existing k8s clusters. Keep in mind TMC can also create k8s clusters for you, but here we will use the "Attach Cluster" part of TMC. The demo is as follows

    1. Of course you will need an account with access to TMC, which for this demo I already have. Once logged in you will see a home screen as follows



    2. In the right-hand corner there is an "Attach Cluster" button; click it to attach an existing cluster to TMC. Enter some cluster details; in this case I am attaching a k8s cluster on GKE and giving it the name "pas-gke-cluster".


    3. Click the "Register" button, which takes you to a screen that allows you to install the VMware Tanzu Mission Control agent. This is simply done by running "kubectl apply ..." against your k8s cluster, which installs an agent that communicates back to TMC itself. Everything is created in a namespace called "vmware-system-tmc". A sketch of what that command looks like is shown below.
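
    The installer URL is generated by TMC specifically for your cluster, so the location below is only a placeholder to show the shape of the command:

    $ # <installer URL> is the link shown on the TMC agent installation screen (placeholder)
    $ kubectl apply -f '<installer URL>'
    $ kubectl get ns | grep vmware-system-tmc    # the agent namespace should appear shortly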



    4. Once you have run the "kubectl apply .." on your cluster you can verify the status of the pods and other components installed as follows

    $ kubectl get all --namespace=vmware-system-tmc

    Or you could just check the status of the various pods as shown below and assume everything else was created ok
      
    $ kubectl get pods --namespace=vmware-system-tmc
    NAME                                     READY   STATUS      RESTARTS   AGE
    agent-updater-67bb5bb9c6-khfwh           1/1     Running     0          74m
    agentupdater-workload-1581383460-5dsx9   0/1     Completed   0          59s
    data-protection-657d8bf96c-v627g         1/1     Running     0          73m
    extension-manager-857d46c6c-zfzbj        1/1     Running     0          74m
    extension-updater-6ddd9858cf-lr88r       1/1     Running     0          74m
    inspection-extension-789bb48b6-mnlqj     1/1     Running     0          73m
    intent-agent-cfb49d788-cq8tk             1/1     Running     0          73m
    policy-sync-extension-686c757989-jftjc   1/1     Running     0          73m
    policy-webhook-5cdc7b87dd-8shlp          1/1     Running     0          73m
    policy-webhook-5cdc7b87dd-fzz6s          1/1     Running     0          73m
    sync-agent-84bd6c7bf7-rtzcn              1/1     Running     0          73m

    5. At this point click the "Verify Connection" button to confirm the agent in your k8s cluster is able to communicate with TMC

    6. Now let's search for our cluster on the "Clusters" page as shown below



    7. Click on "pas-gke-cluster" and you will be taken to an Overview page as shown below. Ensure all green tick boxes are in place; this may take a few minutes, so refresh the page as needed



    8. Since this is an empty cluster I will create a deployment with 2 pods so we can see how TMC shows this workload in the UI. These kubectl commands should work on any cluster as the image is on Docker Hub

    $ kubectl run pbs-deploy --image=pasapples/pbs-demo-image --replicas=2 --port=8080
    $ kubectl expose deployment pbs-deploy --type=LoadBalancer --port=80 --target-port=8080 --name=pbs-demo-service
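
    Before testing, it's worth waiting for the rollout to finish and for the LoadBalancer to get an external IP. A quick sketch using standard kubectl commands:

    $ kubectl rollout status deployment/pbs-deploy    # blocks until both replicas are ready
    $ kubectl get svc pbs-demo-service                # EXTERNAL-IP changes from <pending> to a real address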

    9. Test the workload (Although this isn't really required)

    $ echo "http://`kubectl get svc pbs-demo-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`/customers/1"
    http://104.197.202.165/customers/1

    $ http http://104.197.202.165/customers/1
    HTTP/1.1 200
    Content-Type: application/hal+json;charset=UTF-8
    Date: Tue, 11 Feb 2020 01:43:26 GMT
    Transfer-Encoding: chunked

    {
        "_links": {
            "customer": {
                "href": "http://104.197.202.165/customers/1"
            },
            "self": {
                "href": "http://104.197.202.165/customers/1"
            }
        },
        "name": "pas",
        "status": "active"
    }
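
    The "http" command above is the HTTPie client; if you don't have it installed, plain curl returns the same JSON:

    $ curl -s http://104.197.202.165/customers/1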

    10. Back in the TMC UI click on "Workloads". You should see our deployment as per below


    11. Click on the deployment "pbs-deploy" to see the status of the pods created as part of the deployment replica set plus the YAML of the deployment itself


    12. Of course this is just scratching the surface, but from the other tabs you can see the cluster nodes, namespaces and other information as required, not just for your workloads but also for the cluster itself




    One thing to note here is that when I attach a cluster as shown in this demo, the life cycle of the cluster, for example upgrades, can't be managed or performed by TMC. In the next post I will show how "Create Cluster" is able to control the life cycle of the cluster as well, since that time TMC will actually create the cluster for us.

    Stay tuned!!!

    More Information

    Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    VMware Tanzu Mission Control
    https://cloud.vmware.com/tanzu-mission-control
    Categories: Fusion Middleware

    kubectl tree - A kubectl plugin to explore ownership relationships between Kubernetes objects through ownerReferences

    Sun, 2020-01-12 18:51
    A kubectl plugin to explore ownership relationships between Kubernetes objects through the ownerReferences on them. To get started and install the plugin visit this page.

    https://github.com/ahmetb/kubectl-tree

    Install Steps

    Install as follows

    1. Create a script as follows

    install-krew.sh

    (
      set -x; cd "$(mktemp -d)" &&
      curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}" &&
      tar zxvf krew.tar.gz &&
      KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
      "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
      "$KREW" update
    )

    2. Install as follows

    papicella@papicella:~/pivotal/software/krew$ ./install-krew.sh
    +++ mktemp -d
    ++ cd /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tmp.kliHlfYB
    ++ curl -fsSLO 'https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}'
    ++ tar zxvf krew.tar.gz
    x ./krew-darwin_amd64
    x ./krew-linux_amd64
    x ./krew-linux_arm
    x ./krew-windows_amd64.exe
    x ./LICENSE
    +++ uname
    +++ tr '[:upper:]' '[:lower:]'
    ++ KREW=./krew-darwin_amd64
    ++ ./krew-darwin_amd64 install --manifest=krew.yaml --archive=krew.tar.gz
    Installing plugin: krew
    Installed plugin: krew

    ...

    3. On a Mac add the following to your PATH and source your profile file or start a new shell

    export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
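
    With krew on the PATH, the tree plugin itself is installed through krew (this is the command documented on the kubectl-tree GitHub page):

    $ kubectl krew install tree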

    4. Check plugin is installed

    $ kubectl plugin list
    The following compatible plugins are available:

    /Users/papicella/.krew/bin/kubectl-krew
    /Users/papicella/.krew/bin/kubectl-tree

    You can also check the plugin with this:

    $ kubectl tree --help
    Show sub-resources of the Kubernetes object

    Usage:
      kubectl tree KIND NAME [flags]

    Examples:
      kubectl tree deployment my-app
      kubectl tree kservice.v1.serving.knative.dev my-app

    5. OK, now that it's installed let's see what information it displays about k8s objects and their relationships on my cluster, which has riff and Knative installed

    $ kubectl tree deployment --namespace=knative-serving networking-istio
    NAMESPACE        NAME                                       READY  REASON  AGE
    knative-serving  Deployment/networking-istio                -              8d
    knative-serving  └─ReplicaSet/networking-istio-7fcd97cbf7   -              8d
    knative-serving    └─Pod/networking-istio-7fcd97cbf7-z4dc9  True           8d

    $ kubectl tree deployment --namespace=riff-system riff-build-controller-manager
    NAMESPACE    NAME                                                    READY  REASON  AGE
    riff-system  Deployment/riff-build-controller-manager                -              8d
    riff-system  └─ReplicaSet/riff-build-controller-manager-5d484d5fc4   -              8d
    riff-system    └─Pod/riff-build-controller-manager-5d484d5fc4-7rhbr  True           8d


    More Information

    GitHub Tree Plugin
    https://github.com/ahmetb/kubectl-tree

    Categories: Fusion Middleware

    Spring Boot JPA project riff function demo

    Tue, 2019-12-17 22:09
    riff is an Open Source platform for building and running Functions, Applications, and Containers on Kubernetes. For more information visit the project riff home page https://projectriff.io/

    riff supports running containers using Knative serving, which in turn provides support for:
    • 0-N autoscaling
    • Revisions
    • HTTP routing using Istio ingress

    Want to try an example? If so, head over to the following GitHub project which shows how to do this step by step for a Spring Data JPA function running using riff on a GKE cluster.

    https://github.com/papicella/SpringDataJPAFunction


    More Information

    1. Project riff home page
    https://projectriff.io/

    2. Getting started with riff
    https://projectriff.io/docs/v0.5/getting-started

    Categories: Fusion Middleware

    k8s info: VMware Tanzu Octant - A web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters

    Tue, 2019-12-03 10:33
    Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities

    So how would I install this?

    1. First, on my k8s cluster let's create a deployment and a service. You can skip this step if you already have workloads on your cluster. These commands will work on any cluster as long as it can pull the image from Docker Hub.

    $ kubectl run pbs-demo --image=pasapples/pbs-demo-image --replicas=2 --port=8080
    $ kubectl expose deploy pbs-demo --type=LoadBalancer --port=80 --target-port=8080
    $ http http://101.195.48.144/customers/1

    HTTP/1.1 200
    Content-Type: application/hal+json;charset=UTF-8
    Date: Tue, 03 Dec 2019 16:11:54 GMT
    Transfer-Encoding: chunked

    {
        "_links": {
            "customer": {
                "href": "http://101.195.48.144/customers/1"
            },
            "self": {
                "href": "http://101.195.48.144/customers/1"
            }
        },
        "name": "pas",
        "status": "active"
    }
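
    In case you are wondering where that IP came from, it is the external LoadBalancer IP of the service created by the expose command above (named "pbs-demo" since no --name was given). A quick way to grab it with jsonpath:

    $ echo "http://`kubectl get svc pbs-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`/customers/1"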

    2. To install Octant you can view instructions on the GitHub page as follows

    https://github.com/vmware-tanzu/octant

    Given I am on a Mac, it's installed using brew as shown below. For other operating systems refer to the link above.

    $ brew install octant

    3. That's it, you can now launch the UI as shown below.

    $  octant

    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "deployment/configuration", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/containerEditor", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/serviceEditor", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "octant/deleteObject", "module-name": "configuration"}
    2019-12-03T21:47:56.272+0530 INFO dash/dash.go:370 Using embedded Octant frontend
    2019-12-03T21:47:56.277+0530 INFO dash/dash.go:349 Dashboard is available at http://127.0.0.1:7777

    Octant should immediately launch your default web browser on 127.0.0.1:7777
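
    If you want Octant to look at a cluster other than your current kubectl context, it can be pointed at a specific kubeconfig. The flag below is based on my reading of the Octant docs, so treat it as an assumption and double check with "octant --help":

    $ octant --kubeconfig /path/to/other-kubeconfig.yml    # assumed flag name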

    And to view our deployed application!!!!







    It's a nice UI and it even has the ability to switch to a different k8s context from the menu bar itself



    More Information

    1. Seeing is Believing: Octant Reveals the Objects Running in Kubernetes Clusters
    https://blogs.vmware.com/cloudnative/2019/08/12/octant-reveals-objects-running-in-kubernetes-clusters/

    2. GitHub project page
    https://github.com/vmware-tanzu/octant

    Categories: Fusion Middleware

    k8s info: kubectx and kubens to the rescue

    Tue, 2019-12-03 05:54
    kubectx is a utility to manage and switch between kubectl(1) contexts. To me this is so handy I can't live without it. I am constantly using k8s everywhere from PKS (Pivotal Container Service) clusters, GKE clusters, minikube and wherever I can get my hands on a cluster.

    So when I heard about kubectx I had to try it, and now I can't live without it; it makes my life so much easier. Here's how.

    Where is my current k8s context and potentially what other contexts could I switch to?


    OK, so I am in the k8s cluster with the context "apples". Let's switch to "lemons" then, as sketched below.
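
    Here is roughly what that looks like on the command line (the fruit context names are of course mine):

    $ kubectx            # list all contexts, highlighting the current one (apples)
    $ kubectx lemons     # switch to the lemons context
    $ kubectx -          # switch back to the previous context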


    It's really as simple as that. In my world every k8s cluster is named after a FRUIT.

    Finally, if you wish to set the current namespace for the context, you can use "kubens" to do that just as easily, as shown below.
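
    Again, roughly what that looks like:

    $ kubens                 # list namespaces in the current context, highlighting the active one
    $ kubens kube-system     # make kube-system the default namespace for this context
    $ kubens -               # switch back to the previous namespace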



    More Information

    https://github.com/ahmetb/kubectx

    https://formulae.brew.sh/formula/kubectx
    Categories: Fusion Middleware
