Running Postgres 8.2 on Docker for Windows

First: don’t!

It’s a nightmare, it’s tricky, it isn’t reliable…

Second: why the heck is somebody still using Postgres 8.2? Well, don’t ask me…

If you are still here, let’s do it.

What’s the real problem here? Well, the key point is that Postgres enforces very strict access to its data directory, $PGDATA (default path: $PG_HOME/data). The reason isn’t hard to find: it stores all the database files.

So the user that owns the Postgres process must be the same one that owns the $PGDATA directory. Otherwise the process simply won’t run.

If you just run a default Postgres container, things go smoothly. But you are not a default developer, so you may want to mount the data directory to an external directory on the host. This is where things become funny (or not).

If you run Docker on Linux/Mac, don’t worry: just give the directory ownership to the same user that runs Postgres inside the container. Run and sleep the sleep of the just.
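For the record, in the official Postgres images the in-container “postgres” user has UID 999; a custom 8.2 image may use a different one, so check yours first. A minimal sketch of the Linux/Mac fix:

sudo mkdir -p /opt/postgres/data
sudo chown -R 999:999 /opt/postgres/data   # 999 = postgres UID in the official images; check yours
sudo chmod 700 /opt/postgres/data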

But if you are trying Docker for Windows, things go a little further. There is even an open issue about it: https://github.com/docker/for-win/issues/39

In a happy Linux world you would do this mount just like this:


docker run -d --name postgres \
-v /opt/postgres/data:/var/lib/postgresql/data \
-p 5432:5432 postgres

Don’t try this at home if yours is a Windows home. Docker will tell you that the host volume must be owned by the same user that owns the Docker process.

So you go, happy and confident, and change the directory ownership. Wrong! No result at all. It doesn’t change a thing, doesn’t move an inch.

Then you spend the whole night trying to handle this and begin to think it’s better to give up the project… when you realize you could handle it with Docker volumes. Yay!


docker volume create --name postgres_data -d local
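By the way, if you are curious about where Docker actually keeps this volume, you can ask it (on Docker for Windows the mountpoint lives inside Docker’s VM, so don’t expect to browse it from Explorer):

docker volume inspect postgres_data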

And now mount $PGDATA to the newly created volume:


docker run -d --name postgres \
-v postgres_data:/var/lib/postgresql/data \
-p 5432:5432 postgres

And it worked! Awesome! It makes total sense, as the volume is managed by the Docker service itself, so it handles the permissions.

Let’s finally sleep and keep the client happy…

Not so fast. Your container is really working and you can access Postgres as usual… until you decide to stop the container and run it again…
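The container exits right away, and the logs tell you why (assuming the container is named “postgres”, as above):

docker logs postgres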


/var/lib/postgresql/data has group or world access

Long story short… for some reason that I don’t know and really don’t care about, Docker messes up the permissions after starting the container just once.

Let’s dig a little deeper…

You must manage these crazy permissions in the Dockerfile before the volume is defined. And you have to do it in a well-defined order! Just like this:


ENV PGDATA /var/lib/postgresql/data
RUN mkdir -p $PGDATA
RUN chown -R postgres $PGDATA
RUN chmod -R 0700 $PGDATA
VOLUME /var/lib/postgresql/data

If you change this order it won’t work.
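A quick sanity check after building, to confirm that the ownership and mode survived (the image tag here is hypothetical; use whatever you named yours):

docker build -t postgres-8.2-custom .
docker run --rm postgres-8.2-custom ls -ld /var/lib/postgresql/data
# you want something like: drwx------ ... postgres ... /var/lib/postgresql/data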

If you are using some image from Docker Hub, I recommend you take its Dockerfile and customize it as you need, because if the original Dockerfile deals with this volume in a different way, it doesn’t matter what you do… it will not work!

Hope it may be useful for somebody!


Building a Wildfly Cluster Using Docker

Hi there!

Using Docker to deal with daily challenges can be fun and also rewarding. Sometimes I get amazed at how easily some things are handled using containers.

My last challenge was building a Wildfly cluster using Docker.

Why Wildfly? Because I like it. Why Docker? Because I love it.


If you are eager and wanna go straight to the point, here is the link where you can clone and run the example yourself:

https://github.com/eldermoraes/wildfly-cluster-docker

If you wanna understand what the heck I did there, let’s go!

Domain vs. Standalone

I was tempted to go with a Domain cluster for reasons that I just can’t remember. After banging my head for a couple of hours, I discovered that a domain cluster couldn’t be used in the way that I wanted.

Why? Because when you use Wildfly in Domain mode you don’t have a deployment scanner. It means that you have to do your deployments through its UI or through the CLI.

Well, neither of them was OK for me, because I was trying to build a Docker Appliance. What is a Docker Appliance?

A Docker Appliance is a Docker image customized for your own needs, especially when you use it to distribute your own application

This quote was said by me… right here, right now…

My intention here is to build a Docker Wildfly image with an application baked in, so it is automatically deployed whenever a new container is created from that appliance.

The only option for me was Standalone mode. So let’s get rid of Domain mode for now.

Docker Network

This was the last thing I fixed in order to make things work, but as I am cool with you, I’ll talk about it as the first step.

Having a well-defined network for your containers is key for the configuration that I’ll show below, so let’s build it:

docker network create \
 --driver=bridge \
 --subnet=172.28.0.0/16 \
 --ip-range=172.28.5.0/24 \
 --gateway=172.28.5.254 \
 wildnetwork

You can use whatever subnet, range, and gateway you want. Just do it properly.

Ah… “wildnetwork” is the name of the network that is being created. Yeah, baby… it’s a wild wild world…
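If you want to confirm that the settings took effect, Docker will print them back to you:

docker network inspect wildnetwork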

Standalone-ha.xml

If you go to the “${WILDFLY_HOME}/standalone/configuration” folder you will find these files:

standalone.xml
standalone-ha.xml
standalone-full.xml
standalone-full-ha.xml

The default is the “standalone.xml”. If you want to use some cool Wildfly features you can use the other files. For this example we will use the “standalone-ha.xml” (when we run the containers).

We have to customize this file for each container. As we are using three containers for this example, we made three versions of this file (you’ll see them in the Dockerfile below).

You need to open the file, find the “interfaces” node and do something like this:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:172.28.5.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:172.28.5.1}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:172.28.5.1}"/>
    </interface>
</interfaces>

Of course, use the right IP for each container…
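If editing three copies by hand sounds tedious, a small shell loop can generate them (a sketch, assuming your template file uses 172.28.5.1 everywhere, as in the snippet above):

# generate standalone-ha-1.xml, -2.xml and -3.xml from a single template,
# swapping the last octet of the bind addresses
for i in 1 2 3; do
  sed "s/172\.28\.5\.1/172.28.5.$i/g" standalone-ha.xml > standalone-ha-$i.xml
done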

Dockerfile

You will build your appliance with a Dockerfile. For this example, my Dockerfile is just like this:

FROM jboss/wildfly

# Build argument with default value
ARG APP_FILE=appfile.war

# Add your application to the deployment folder
ADD ${APP_FILE} /opt/jboss/wildfly/standalone/deployments/${APP_FILE}

# Add standalone-ha.xml - set your own network settings
ADD standalone-ha-1.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-1.xml
ADD standalone-ha-2.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-2.xml
ADD standalone-ha-3.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-3.xml

# Add user for administration purposes
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123 --silent

Explaining:

  • “FROM” will get a pre-built image from Docker Hub (or use the local copy if you have already pulled it before);
  • “ARG” defines a build argument. I use it to pass the application file as an argument and make my Dockerfile more flexible and reusable;
  • “ADD” will add the application to the deployment folder, along with any other file that I want (like the standalone*.xml files);
  • The “RUN” line will create an “admin” user with the “admin123” password. This can be useful if you want to log into the administration UI.

Now I am ready to build the appliance and run the containers.

Image (Appliance) and Containers

Now we can build our customized image (appliance):

docker build -t wildfly-cluster --build-arg APP_FILE=apptest.war .

So this single-line command will build an image (appliance) called “wildfly-cluster”, deploying the “apptest.war”. Don’t forget the dot at the end: it means the build context (the folder holding the Dockerfile) is the current one.

Ah… the “apptest.war” must be in the current folder too.

Image built. Let’s run the containers and put some fire at the… oh, forget it:

docker run -d --name wild1 -h wild1 -p 8080:8080 -p 9990:9990 --network=wildnetwork --ip 172.28.5.1 wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-1.xml -u 230.0.0.4
docker run -d --name wild2 -h wild2 -p 8081:8080 -p 9991:9990 --network=wildnetwork --ip 172.28.5.2 wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-2.xml -u 230.0.0.4
docker run -d --name wild3 -h wild3 -p 8082:8080 -p 9992:9990 --network=wildnetwork --ip 172.28.5.3 wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-3.xml -u 230.0.0.4

Lots of things happening here:

  • “-d” will detach the container from the terminal, so you won’t see the log messing up your screen;
  • “--name” will name the container (wild1, wild2, wild3);
  • “-h” will set the container’s hostname;
  • “-p” will publish the ports that I want/need, mapping host ports to container ports;
  • “--network” will specify the Docker network that I want to use;
  • “--ip” will define the IP. This is VERY VERY important in order to make the cluster work;
  • “wildfly-cluster” is the image used (our appliance);
  • “/opt/jboss…/standalone.sh” is the script used to start Wildfly in the container;
  • “-c standalone-ha-1.xml” will specify the configuration file for this container (details above in the Standalone-ha.xml section);
  • “-u” specifies the IP used for multicast.
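Before adding the load balancer, it’s worth checking that the three nodes actually found each other. A quick, hedged check (the exact log wording varies between Wildfly versions, but the JGroups cluster view messages should list all three hosts):

docker ps --format '{{.Names}}\t{{.Ports}}'   # wild1, wild2 and wild3 should be up
docker logs wild1 2>&1 | grep -i 'view'       # look for a cluster view containing wild1, wild2 and wild3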

Finally, we put them all together, nice and happy, behind a load balancer:

docker run -d --name wild-balancer -p 80:80 \
  --link wild1:wild1 \
  --link wild2:wild2 \
  --link wild3:wild3 \
  --env-file ./env.list \
  --network=wildnetwork \
  --ip 172.28.5.4 jasonwyatt/nginx-loadbalancer

The “--env-file” argument passes a file that holds some important environment variables for the load balancer. Clone the repository to see the details.

If you made it this far, you probably wanna see if it works! Just open your browser and go to:

http://localhost/apptest/

You should see a page showing the node’s IP, hostname, and session ID.

Note that as we refresh the page, the IP and hostname change, but the Session ID does not. It means that the cluster is working and it is replicating the session between the nodes.
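If you prefer the terminal, a loop of curl requests shows the same behavior (a sketch: -c/-b keep a cookie jar so the session cookie is sent back on every hit, which is what keeps the Session ID stable; adjust the grep to whatever your test page actually prints):

for i in 1 2 3 4 5; do
  curl -s -c cookies.txt -b cookies.txt http://localhost/apptest/ | grep -i -e hostname -e session
done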

If the session changes, something went bad… 😦

Like it? Leave a comment! Didn’t like it? Also leave a comment and let’s talk about it! 😉

 

P.S.: I should also say that some links helped me to write this post:

https://hub.docker.com/r/jboss/wildfly/

https://docs.jboss.org/author/display/WFLY10/Clustering+and+Domain+Setup+Walkthrough

https://docs.jboss.org/author/display/WFLY8/Load+Balanced+HA+Standalone+Cluster+-+Howto

https://goldmann.pl/blog/2013/10/07/wildfly-cluster-using-docker-on-fedora/

JavaOne Session Video

We are having such great days here at JavaOne 2016 (San Francisco). Today was the day we gave our talk – “High Availability with Docker and JavaEE (Step by Step)” – and fortunately it was recorded! So I’ll give you the link in case you wanna watch it.

I’ll also share some impressions about a few points of the talk, but I’ll probably wait some more days to do it.

Enjoy and leave your comments!

Check the video here!

Using Docker to deal with a missing TomEE feature

When Bruno Souza and I were writing the article “Step-by-Step High Availability with Docker and Java EE”, we faced a little issue with an Apache TomEE cluster feature: “hot” deployment across the cluster nodes is still not available, even though the configuration in server.xml is already defined:

<Deployer
    className="org.apache.catalina.ha.deploy.FarmWarDeployer"
    deployDir="/tmp/war-deploy/"
    tempDir="/tmp/war-temp/"
    watchDir="/tmp/war-listen/"
    watchEnabled="true"/>

The point is: if you deploy a new version of your application on one node, it doesn’t spread through your entire cluster. At first we looked at this as some kind of tricky situation that we had to deal with.

But after talking for just a few moments, we realized that the article’s purpose itself was the answer to that issue! How? Simple…

When we create an appliance (for the definition, take a look at the article) we already have the application deployed to our application server. So when we use this appliance to build our cluster (running multiple Docker instances with the proper configuration) we don’t need any other deployment at all!
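In practice the appliance boils down to a tiny Dockerfile (a sketch, assuming the official tomee base image and a hypothetical app.war in the build context; adjust the tag and paths to your setup):

# The application ships inside the image, so every container created
# from it starts with the app already deployed - no FarmWarDeployer needed
FROM tomee
COPY app.war /usr/local/tomee/webapps/app.war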

Containers to the rescue! 😉