eldermoraes.com alive and kicking!

This blog has officially migrated to a self-hosted solution and is now really running as “eldermoraes.com” (not just a redirect anymore).

All posts were migrated (and also the stats… thanks to Jetpack!).

If you have a bookmark… thank you and please change it! 😉


Running Postgres 8.2 on Docker for Windows

First: don’t!

It’s a nightmare, it’s tricky, it isn’t reliable…

Second: why the heck is anybody still using Postgres 8.2? Well, don’t ask me…

If you are still here, let’s do it.

What’s the real problem here? Well, the key point is that Postgres enforces very restricted access to its $PGDATA directory (default path: $PG_HOME/data). The reason isn’t hard to find: it stores all the database files.

So the user that owns the Postgres process must be the same that owns the $PGDATA directory. Otherwise the process simply won’t run.
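The rule can be sketched with plain shell commands (the path here is illustrative; the real check happens against the actual data directory):

```shell
# Illustrative sketch of the rule Postgres enforces: the data directory
# must belong to the server user and be closed to group/world access.
PGDATA=/tmp/pgdata-demo          # hypothetical path, not a real cluster
mkdir -p "$PGDATA"
chmod 700 "$PGDATA"              # 0700 is what Postgres expects
stat -c '%U %a' "$PGDATA"        # prints owner and mode
```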

If you just run a default Postgres container, things go smoothly. But you are not a default developer, so you may want to mount “data” to an external directory on the host. This is where things become funny (or not).

If you run Docker on Linux/Mac, don’t worry: just give the directory ownership to the user that runs the Postgres process inside the container. Run it and sleep the sleep of the just.
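On Linux, for example, a minimal sketch looks like this, assuming the container’s postgres user has uid 999 (true for the official image at the time of writing, but verify for your image):

```shell
# Sketch, assuming uid 999 is the postgres user inside the container.
# Give the host directory to that uid before bind-mounting it:
sudo mkdir -p /opt/postgres/data
sudo chown -R 999:999 /opt/postgres/data

docker run -d --name postgres \
  -v /opt/postgres/data:/var/lib/postgresql/data \
  -p 5432:5432 postgres
```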

But if you are trying Docker for Windows, things go a little further. For instance, there is an open issue at: https://github.com/docker/for-win/issues/39

In a happy Linux world you would do this mount just like this:

docker run -d --name postgres \
-v /opt/postgres/data:/var/lib/postgresql/data \
-p 5432:5432 postgres

Don’t try this at home if yours is a Windows home. Docker will tell you that the host volume must be owned by the same user that owns the Docker process.

So you go, happy and confident, and change the directory ownership. Wrong! No result at all. It doesn’t change a thing, doesn’t move an inch.

Then you spend the whole night trying to handle this and begin to think it’s better to give up the project… when you realize you can handle it with Docker volumes. Yay!

docker volume create --name postgres_data -d local

And now mount $PGDATA to the just-created volume:

docker run -d --name postgres \
-v postgres_data:/var/lib/postgresql/data \
-p 5432:5432 postgres

And it worked! Awesome! It makes total sense: the volume is managed by the Docker service itself, so Docker handles its permissions.
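If you are curious where the data actually lives, you can ask Docker (a sketch; on Docker for Windows the mountpoint sits inside Docker’s own VM, which is exactly why the permission problem goes away):

```shell
# Show where Docker keeps the volume's data and who manages it
docker volume inspect postgres_data
```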

Let’s finally sleep and keep the client happy…

Not so fast. Your container really is working and you can access Postgres as usual… until you decide to stop the container and run it again:

/var/lib/postgresql/data has group or world access

Long story short: for some reason that I don’t know (and really don’t care about), Docker messes up the permissions after starting the container just once.

Let’s dig a little deeper…

You must manage these crazy permissions in the Dockerfile before the volume is defined. And you have to do it in exactly this order:

ENV PGDATA /var/lib/postgresql/data
RUN mkdir -p $PGDATA
RUN chown -R postgres $PGDATA
RUN chmod 0700 -R $PGDATA
VOLUME /var/lib/postgresql/data

If you change this order it won’t work.

If you are using an image from Docker Hub, I recommend you take its Dockerfile and customize it as you need, because if the original Dockerfile deals with this volume in a different way, it doesn’t matter what you do… it will not work!

I hope this is useful for somebody!




Modular and Reusable Java EE Architecture with Docker

One more article published at the Oracle Community Directory by me and my friend Bruno Souza, now with an incredible contribution from Andre de Carvalho.

This article is the second part of our series, which started with “Step by Step High Availability with Docker and Java EE“. Now we’ve added more complexity and features to the scenario.

Enjoy it at:


And leave your comments below! 😉

Building a Wildfly Cluster Using Docker

Hi there!

Using Docker to deal with daily challenges can be fun and also rewarding. Sometimes I’m amazed at how easily some things are handled using containers.

My last challenge was building a Wildfly cluster using Docker.

Why Wildfly? Because I like it. Why Docker? Because I love it.


If you are anxious and wanna go straight to the point, here is the link where you can clone and run the example by yourself:


If you wanna understand what the heck I did there, let’s go!

Domain vs. Standalone

I was tempted to go with a Domain cluster for reasons I just can’t remember. After banging my head against it for a couple of hours, I discovered that a Domain cluster cannot be used the way I wanted.

Why? Because when you use Wildfly in Domain mode you don’t have a deployment scanner. That means you have to do your deployments through its UI or through the CLI.

Well, neither of them was OK for me, because I was trying to build a Docker appliance. What is a Docker appliance?

A Docker appliance is a Docker image customized for your own needs, especially when you use it to distribute your own application.

This quote was said by me… right here, right now…

My intention here is to build a Docker Wildfly image with an application so it can be automatically deployed when a new container is built from that appliance.

The only option for me is Standalone mode. So let’s get rid of Domain mode for now.

Docker Network

This was the last thing I fixed to make things work, but since I’m cool with you I’ll cover it as the first step.

Having a well-defined network for your containers is key for the configurations I’ll show below, so let’s build it:

docker network create \
 --driver=bridge \
 --subnet=<subnet> \
 --ip-range=<ip-range> \
 --gateway=<gateway> \
 wildnetwork

You can use whichever subnet, range and gateway that you want. Just do it properly.

Ah… “wildnetwork” is the name of the network being created. Yeah, baby… it’s a wild wild world…
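For illustration, with made-up addresses the command could look like this (pick any private range that doesn’t clash with your host networks; the values below are hypothetical):

```shell
# Hypothetical addresses; any private range you like works
docker network create \
  --driver=bridge \
  --subnet=172.18.0.0/16 \
  --ip-range=172.18.0.0/24 \
  --gateway=172.18.0.1 \
  wildnetwork
```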


Standalone.xml

If you go to the “${WILDFLY_HOME}/standalone/configuration” folder you will find these files: “standalone.xml”, “standalone-ha.xml”, “standalone-full.xml” and “standalone-full-ha.xml”.
The default is “standalone.xml”. If you want to use some cool Wildfly features, you can use the other files. For this example we will use “standalone-ha.xml” (when we run the containers).

We had to customize this file for each container. As we are using three containers in this example, we made three versions of the file (you’ll see them in the Dockerfile below).

You need to open the file, find the “interfaces” node and do something like this:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:}"/>
    </interface>
</interfaces>

Of course, put the right IP after each colon (the address the container will use in the Docker network)…


Dockerfile

You will build your appliance with a Dockerfile. For this example, my Dockerfile looks like this:

FROM jboss/wildfly

# Build argument with default value
ARG APP_FILE=appfile.war

# Add your application to the deployment folder
ADD ${APP_FILE} /opt/jboss/wildfly/standalone/deployments/${APP_FILE}

# Add standalone-ha.xml - set your own network settings
ADD standalone-ha-1.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-1.xml
ADD standalone-ha-2.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-2.xml
ADD standalone-ha-3.xml /opt/jboss/wildfly/standalone/configuration/standalone-ha-3.xml

# Add user for administration purposes
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123 --silent


  • “FROM” will pull a pre-built image from Docker Hub (or use the local copy if you have pulled it before);
  • “ARG” defines a build argument with a default value. I use it to pass the application file name and make my Dockerfile more flexible and reusable;
  • “ADD” will add the application to the deployment folder, plus any other files I want (such as the standalone-ha-*.xml files);
  • The “RUN” line will create an “admin” user with the password “admin123”. This can be useful if you want to log into the administration UI.

Now I am ready to build the appliance and run the containers.

Image (Appliance) and Containers

Now we can build our customized image (appliance):

docker build -t wildfly-cluster --build-arg APP_FILE=apptest.war .

So this single command will build an image (appliance) called “wildfly-cluster”, deploying “apptest.war”. Don’t forget the dot at the end; it tells Docker that the Dockerfile is in the current folder.

Ah… “apptest.war” must be in the current folder too.

Image built. Let’s run the containers and put some fire at the… oh, forget it:

docker run -d --name wild1 -h wild1 -p 8080:8080 -p 9990:9990 --network=wildnetwork --ip <ip1> wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-1.xml -u <multicast-ip>
docker run -d --name wild2 -h wild2 -p 8081:8080 -p 9991:9990 --network=wildnetwork --ip <ip2> wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-2.xml -u <multicast-ip>
docker run -d --name wild3 -h wild3 -p 8082:8080 -p 9992:9990 --network=wildnetwork --ip <ip3> wildfly-cluster /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-3.xml -u <multicast-ip>

There’s a lot happening here:

  • “-d” detaches the console from the terminal so the log won’t mess up your screen;
  • “--name” names the container (wild1, wild2, wild3);
  • “-h” sets the container’s hostname;
  • “-p” exposes the ports I want/need and maps them to host ports;
  • “--network” specifies the Docker network to use;
  • “--ip” defines the container’s fixed IP. This is VERY VERY important to make the cluster work;
  • “wildfly-cluster” is the image used (our appliance);
  • “/opt/jboss/…/standalone.sh” is the script used to start Wildfly in the container;
  • “-c standalone-ha-1.xml” specifies the configuration file for this container (details above in the Standalone.xml section);
  • “-u” specifies the IP used for multicast.
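Filling in hypothetical values, the first command could look like this (172.18.0.2 is a made-up address from the wildnetwork range, and 230.0.0.4 a made-up multicast group; use your own):

```shell
# Illustrative values only: a fixed IP from the wildnetwork range
# and a multicast group address for the cluster
docker run -d --name wild1 -h wild1 -p 8080:8080 -p 9990:9990 \
  --network=wildnetwork --ip 172.18.0.2 wildfly-cluster \
  /opt/jboss/wildfly/bin/standalone.sh -c standalone-ha-1.xml -u 230.0.0.4
```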

Finally we put them all together, happily, behind a load balancer:

docker run -d --name wild-balancer -p 80:80 \
  --link wild1:wild1 \
  --link wild2:wild2 \
  --link wild3:wild3 \
  --env-file ./env.list \
  --network=wildnetwork \
  --ip <balancer-ip> jasonwyatt/nginx-loadbalancer

The “--env-file” argument passes a file containing some important environment variables for the load balancer. Clone the repository to see the details.

If you made it this far, you probably wanna see if it works! Just open your browser and go to the link:


You should see a result like this:

Note that while we refresh the page the IP and hostname change, but the session ID does not. That means the cluster is working and replicating the session between the nodes.
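You can also check this from the terminal with curl, a sketch (the URL and cookie name depend on your test app; JSESSIONID is the usual servlet default):

```shell
# Hit the balancer twice with the same cookie jar; with session
# replication working, the JSESSIONID should survive across nodes.
curl -s -c cookies.txt -o /dev/null http://localhost/
cat cookies.txt            # note the JSESSIONID value
curl -s -b cookies.txt -o /dev/null http://localhost/
cat cookies.txt            # same JSESSIONID, even if another node answered
```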

If the session ID changes, something went bad… 😦

Like it? Leave a comment! Didn’t like it? Also leave a comment and let’s talk about it! 😉


P.S.: I should also say that some links helped me write this post:





JavaOne Session Video

We are having such great days here at JavaOne 2016 (San Francisco). Today was the day we gave our talk – “High Availability with Docker and JavaEE (Step by Step)” – and fortunately it was recorded! So I’ll give you the link in case you wanna watch it.

I’ll share some impressions on a few points of view, but I’ll probably wait a few more days to do it.

Enjoy and leave your comments!

Check the video here!

JavaOne LA 2016 – Part 2 – Being a Speaker

It’s been really, really difficult to keep posting here… I’ve got to deal with this!

In the second part of the “JavaOne LA 2016” series, I will share some thoughts about the amazing experience of being a speaker at a big event like JavaOne.

Some people would ask: “Why be a speaker?”, “Why give a talk at a conference?”, “Why expose yourself?”

For me this is all about improving yourself and your career. That’s why I decided to give it a try this year and submit papers to some of the most important IT events in Brazil and around the world.

To prepare for this kind of task you must use skills that maybe you don’t use on a daily basis (especially if you are a developer/architect like me). You need to research, write, prepare the presentation, research again, build demos… you even need to smile when someone asks a question and you have absolutely no idea what he/she is talking about!

(That last part I do on a daily basis…)

Being at JavaOne LA was even more interesting, as it is probably one of the biggest IT events in Latin America (if not the biggest). There you can find tons of people to share experiences and interests with. And, even better, I found tons of people who know much more than me and are much better than me in so many ways (not that that’s hard to accomplish…).

Oh… talking about my talk… here you can see the presentation! 😉

Worthy of mention are my friends Bruno Souza and Andre Carvalho. Our partnership was key to getting this done.

Next September all three of us will be at JavaOne US (San Francisco) giving another talk. Surely another great experience, worthy of a couple of posts!

Leave a comment if you feel like it too… 😉