Tuesday, July 12, 2016

OpenDJ Pets on Kubernetes





Stateless "12-factor" applications are all the rage, but there are some kinds of services that are inherently stateful. Good examples are things like relational databases (Postgres, MySQL) and NoSQL databases (Cassandra, etc).

These services are difficult to containerize, because the default Docker model favours ephemeral containers whose data disappears when the container is destroyed.

These services also have a strong need for identity. A database "primary" server is different from a "slave". In Cassandra, certain nodes are designated as seed nodes, and so on.

OpenDJ is an open source LDAP directory server from ForgeRock. LDAP servers are inherently "pet like" insomuch as the directory data must persist beyond the container lifetime. OpenDJ nodes also replicate data between themselves to provide high-availability and therefore need some kind of stable network identity.

Kubernetes 1.3  introduces a feature called "Pet Sets" that is designed specifically for these kinds of stateful applications.   A Kubernetes PetSet provides applications with:
  • Permanent hostnames that persist across restarts
  • Automatically provisioned persistent disks per container that live beyond the life of a container
  • Unique identities in a group to allow for clustering and leader election
  • Initialization containers which are critical for starting up clustered applications

These features are exactly what we need to deploy OpenDJ instances.  If you want to give this a try, read on...

You will need access to a Kubernetes 1.3 environment. Using minikube is the recommended way to get started on the desktop. 

You will need to fork and clone the ForgeRock Docker repository to build the OpenDJ base image. The repository is on our stash server: 


To build the OpenDJ image, you will do something like:

cd opendj
docker build -t forgerock/opendj:latest . 

If you are using minikube, you should connect your docker client to the docker daemon running inside your minikube cluster (use minikube docker-env). Kubernetes will then not need to "pull" the image from a registry - it will already be loaded. For development this approach speeds things up considerably.
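For example, from the shell where you run the build:

eval $(minikube docker-env)
docker build -t forgerock/opendj:latest .

The first command points your docker client at the daemon inside the minikube VM, so the image built by the second command lands directly in the cluster's local image cache.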


Take a look at the README for the OpenDJ image. There are a few environment variables that the container uses to determine how it is bootstrapped and configured. The most important ones are:
  • BOOTSTRAP: Path to a shell script that will initialize OpenDJ. This is only executed if the data/config directory is empty. Defaults to /opt/opendj/bootstrap/setup.sh
  • BASE_DN: The base DN to create. Used in setup and replication
  • DJ_MASTER_SERVER: If set, run.sh will call bootstrap/replicate.sh to enable replication to this master. This only happens if the data/config directory does not exist

There are sample bootstrap setup.sh scripts provided as part of the container, but you can override these and provide your own script.  
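If you want to experiment with these settings outside of Kubernetes first, a quick smoke test of the image looks something like this (the BASE_DN value is illustrative, the BOOTSTRAP path just restates the default above, and -P publishes whatever ports the image exposes):

docker run -d --name opendj -P \
    -e BASE_DN=dc=example,dc=com \
    -e BOOTSTRAP=/opt/opendj/bootstrap/setup.sh \
    forgerock/opendj:latest

docker logs -f opendj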

Next,  fork and clone the ForgeRock Kubernetes project here:

The opendj directory contains the Pet Set example.  You must edit the files to suit your needs, but as provided, the artifacts do the following:

  • Configures two OpenDJ servers (opendj-0 and opendj-1) in a pet set. 
  • Runs the  cts/setup.sh script provided as part of the docker image to configure OpenDJ as an OpenAM CTS server.
  • Optionally assigns persistent volumes to each pet, so the data will live across restarts
  • Assigns "opendj-0" as the master.  The replicate.sh script provided as part of the Docker image will replicate each node to this master.  The script ignores any attempt by the master to replicate to itself.  As each pet is added (Kubernetes creates them in order) replication will be configured between that pet and the opendj-0 master. 
  • Creates a Kubernetes service to access the OpenDJ instances. Instances can be addressed by their unique name (opendj-1), or by the service name (opendj), which provides primitive load balancing (currently round robin). Applications can also perform a DNS lookup on the opendj SRV record to obtain a list of all the OpenDJ instances in the cluster (see the lookup example after this list).
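For example, from another pod in the cluster you can resolve the instances with something like the following (the "default" namespace and the "ldap" port name are assumptions - adjust them to match your service definition):

# All instances behind the service
nslookup opendj.default.svc.cluster.local

# A specific pet by its stable name
nslookup opendj-0.opendj.default.svc.cluster.local

# SRV record listing every instance in the cluster
dig +short SRV _ldap._tcp.opendj.default.svc.cluster.local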

The replication topology is simple: every OpenDJ instance replicates to opendj-0. This works fine for small OpenDJ clusters. For more complex installations you will need to enhance this example.


To create the PetSet:

kubectl create -f opendj/


If you bring up the minikube dashboard:

minikube dashboard 

You should see the two pets being created (be patient, this takes a while). 
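If you prefer the command line, you can watch the same thing happen with:

kubectl get pods --watch
kubectl get petset opendj

The pods appear one at a time (opendj-0 first, then opendj-1), since Kubernetes creates the members of a Pet Set in order.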




Take a look at the pod logs using the dashboard or:

kubectl logs opendj-0 -f 

Now try scaling up your PetSet. In the dashboard, edit the Pet Set object, and change the number of replicas from 2 to 3:
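If you would rather skip the dashboard, the same change can be made from the command line. kubectl scale may not handle Pet Sets in this release, so patching the replica count directly (or running kubectl edit petset opendj) is a safe alternative:

kubectl patch petset opendj -p '{"spec":{"replicas":3}}'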

You should see a new OpenDJ instance being created. If you examine the logs for that instance, you will see it has joined the replication topology. 

Note: Scaling down the Pet Set is not implemented at this time. Kubernetes will remove the pod, but the remaining OpenDJ instances will still think the scaled-down node is part of the replication topology.
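If you want to see what OpenDJ itself thinks the replication topology looks like, you can run dsreplication status inside one of the pods. Something along these lines should work - the binary path and the admin credentials below are assumptions, so check the setup scripts used by the image:

kubectl exec opendj-0 -- /opt/opendj/bin/dsreplication status \
    --adminUID admin --adminPassword password \
    --hostname localhost --port 4444 --trustAll --no-prompt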




Friday, June 24, 2016

Creating an internal CA and signed server certificates for OpenDJ using cfssl, keytool and openssl


Yes, that title is quite a mouthful, and mostly intended to get the Google juice if I need to find this entry again.

I spent a couple of hours figuring out the magical incantations, so thought I would document this here.

The problem: You want OpenDJ to use something other than the default self-signed certificate for SSL connections.   A "real" certificate signed by a CA (Certificate Authority) is expensive and a pain to procure and install.

The next best alternative is to create your own "internal" CA, and  have that CA sign certificates for your services.   In most cases, this is going to work fine for *internal* services that do not need to be trusted by a browser.

You might ask why is this better than just using self-signed certificates?  The idea is that you can import your CA certificate once into the truststore for your various clients, and thereafter those clients will trust any certificate presented that is signed by your CA.

For example, assume I have OpenDJ servers server-1, server-2 and server-3. Using only self-signed certificates, I will need to import the certs for each server (three in this case) into my client's truststore. If instead I use a CA, I need only import a single CA certificate. The OpenDJ server certificates will be trusted because they are signed by my CA. Once you have a lot of services deployed, using self-signed certificates becomes super painful. Hopefully, that all makes sense...

Now how do you create all these certificates?  Using CloudFlare's open source  cfssl utility, Java keytool, and a little openssl.

I'll spare you the details, and point you to this shell script which you can edit for your environment:

Here is the gist:
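In outline, the script does something like the following (the CSR JSON files, profile name, hostname and passwords are all illustrative - adapt them to your environment):

# 1. Create the internal CA (ca-csr.json describes the CA subject)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# 2. Issue a server certificate signed by that CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=server -hostname=server-1.example.com server-csr.json | cfssljson -bare server

# 3. Convert the key and certificate into PKCS12 so keytool can import them
openssl pkcs12 -export -in server.pem -inkey server-key.pem \
    -out server.p12 -name server-cert -password pass:changeit

# 4. Import the PKCS12 bundle into the OpenDJ keystore
keytool -importkeystore -srckeystore server.p12 -srcstoretype pkcs12 \
    -srcstorepass changeit -destkeystore keystore -deststorepass changeit

# 5. Import the CA certificate (once) into your client truststore
keytool -importcert -trustcacerts -alias internal-ca -file ca.pem \
    -keystore truststore -storepass changeit -noprompt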



Monday, October 26, 2015

Kubernetes Namespaces and OpenAM




I have been conducting some experiments running the ForgeRock stack on Kubernetes. I recently stumbled on namespaces.

In a nutshell, Kubernetes (k8s) namespaces provide isolation for instances. The typical use case is to provide isolated environments for dev, QA, production and so on.

I had an "Aha!" moment when it occurred to me that namespaces could also provide multi-tenancy on a k8 cluster. How might this work?

Let's create a two node OpenAM cluster using an external OpenDJ instance:

See https://github.com/ForgeRock/fretes  for some samples used in this article

kubectl create -f am-dj-idm/

The above command launches all the containers found in the given directory, wires them together (updating DNS records), and creates a load balancer on GCE.

 If I look at my services:

 kubectl get service 

I see something like this:

NAME         LABELS            SELECTOR     IP(S)             PORT(S)
openam-svc   name=openam-svc   site=site1   10.215.249.206    80/TCP
                                            104.197.122.164

(note: I am eliding a bit of the output here for brevity)

That second IP for openam-svc is the external IP of the load balancer configured by Kubernetes. If you bring up this IP address in a browser, you will see the OpenAM login page.
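For reference, that external address comes from a Kubernetes Service of type LoadBalancer. A rough sketch of what openam-svc might look like (the selector and port match the output above; everything else is illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: openam-svc
  labels:
    name: openam-svc
spec:
  type: LoadBalancer
  selector:
    site: site1
  ports:
  - port: 80
    protocol: TCP
EOF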

Now, let's switch to another namespace, say "tenant1" (I previously created this namespace):

kubectl config use-context tenant1
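For reference, the tenant1 namespace and a matching kubectl context were created earlier with something like this (the cluster and user names are placeholders - use the ones from your own kubeconfig):

kubectl create namespace tenant1
kubectl config set-context tenant1 --namespace=tenant1 \
    --cluster=my-gke-cluster --user=my-gke-user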

A kubectl get services  should be empty, as we have no services running yet in the tenant1 namespace. 

So let's create some: 

kubectl create -f am-dj-idm/ 


This is the same command we ran before - but this time we are running against a different namespace.  

Looking at our services, we see:

NAME         LABELS            SELECTOR     IP(S)             PORT(S)
openam-svc   name=openam-svc   site=site1   10.215.255.185    80/TCP
                                            23.251.153.176

Pretty cool. We now have two OpenAM instances deployed, with complete isolation, on the same cluster, using only a handful of commands. 

Hopefully this gives you a sense of why I am so excited about Kubernetes.



Monday, September 14, 2015

A script to download ForgeRock nightly binaries



Here is a little script to download all of the nightly builds for the ForgeRock stack. Handy for testing!


The script is available as a gist: https://gist.github.com/wstrange/499008ad8cf29eeef28c

This file is part of the frstack project. You may find a more up-to-date copy here.



Thursday, June 18, 2015

Sample todo app using Angular2 and Dart



Here is a sample todo app written in Angular2 and Dart. This is largely copied from David East's sample JS angular2 app.



Tuesday, May 26, 2015

Running OpenAM and OpenDJ on Kubernetes with Google Container Engine



Still quite experimental, but if you are adventurous, have a look at:

https://github.com/ForgeRock/frstack/tree/master/docker/k8


This will set up a two-node Kubernetes cluster running OpenAM and OpenDJ. It uses images on Docker Hub that provide nightly builds of OpenAM and OpenDJ.

I will be presenting this at the ForgeRock IRM Summit this Thursday. Fingers crossed that the demo gods smile down on me!


Thursday, April 2, 2015

Nice Demo of OpenAM log analysis using the ELK stack


The folks at Identropy have put together a nifty video showing analysis of OpenAM audit log events using the ELK stack (Elasticsearch, Logstash, Kibana).

Check it out here: