WSO2 ESB - A Quick Glance at the Capabilities

It's a big but connected world, with a huge number of entities communicating in different languages and over different protocols. In a service-oriented architecture there is a set of such entities/components providing and consuming different services.
For these heterogeneous entities to communicate, there needs to be someone in the middle who can speak with all of them regardless of the languages they speak and the protocols they follow. Also, in order to deliver a useful service to a consumer, someone needs to orchestrate the services provided by the different entities. This someone had better be fast and able to handle concurrency well.
Abstractly speaking, an Enterprise Service Bus (ESB) is that someone, and WSO2 ESB is the best open-source option out there if you need one!

The following are the main functionalities WSO2 ESB provides [2]:

1. Service mediation

2. Message routing

3. Data transformation

4. Data transportation

5. Service hosting

Even though the ESB's built-in capabilities cover most of the integration use cases you might need to implement, there are many extension points you can use in case your use case cannot be implemented with the built-in capabilities.

You can download WSO2 ESB at [1] and play with it! [2] is a great article that you must read; it quickly walks you through what WSO2 ESB has to offer!


SFTP protocol over VFS transport in WSO2 ESB 5.0.0

The Virtual File System (VFS) transport is used by WSO2 ESB to process files in the specified source directory. After processing the files, it moves them to a specified location or deletes them.

Let's look at a sample scenario showing how we can use this functionality of WSO2 ESB.
Say you need to periodically check a file system location on a given remote server; if a file is available, you need to send an email with that file attached and move the file to some other file system location. This can be achieved as follows.

1. Let's first configure your remote server so that ESB could securely communicate with it over SFTP.
First, create a public-private key pair by running the ssh-keygen command:

manurip@manurip-ThinkPad-T540p:~/Documents/Work/Learning/blogstuff$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/manurip/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/manurip/.ssh/id_rsa.
Your public key has been saved in /home/manurip/.ssh/id_rsa.pub.
The key fingerprint is:
c3:57:b2:82:ee:d3:b3:74:55:bf:9c:93:b7:7a:2e:df manurip@manurip-ThinkPad-T540p
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|          . . .  |
|       o   + . . |
|      . S o .   .|
|     .   + .  . +|
|      ... .    *.|
|     ...o.   . .=|
|      ...o   .*+E|
+-----------------+

Now open your ~/.ssh folder (located under your home directory on Linux) and open the file containing the public key (id_rsa.pub by default). Copy its contents, log in to your remote server, and paste them into the ~/.ssh/authorized_keys file there.

2. Now, let's configure ESB.
First we need to enable the VFS transport receiver so that we can monitor and receive files from our remote server. To do that, uncomment the following line in ESB-home/repository/conf/axis2/axis2.xml.

<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

Also, we need to be able to send mail. For that, uncomment the following transport sender in the same file and fill in the configuration. If you will be using a Gmail address to send mail, the configuration would be as follows.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
        <parameter name="mail.smtp.user"></parameter>
        <parameter name="mail.smtp.password">password</parameter>
        <parameter name="mail.smtp.from"></parameter>
</transportSender>

3. Now, create the following proxy service and sequence, and save them in ESB-home/repository/deployment/server/synapse-configs/default/proxy-services and ESB-home/repository/deployment/server/synapse-configs/default/sequences respectively.

Here is the proxy service
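(The original configuration was not preserved in this copy; below is a minimal sketch of such a proxy service. The sftp URLs, file name pattern, and poll interval are assumptions you should adjust for your setup.)

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FileProxy" transports="vfs">
   <parameter name="transport.vfs.FileURI">sftp://user@remote-server/home/user/test/source</parameter>
   <parameter name="transport.vfs.ContentType">application/xml</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.PollInterval">15</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">sftp://user@remote-server/home/user/dest</parameter>
   <target>
      <inSequence>
         <log level="custom">
            <property name="log" value="====VFS Proxy===="/>
         </log>
         <sequence key="sendMailSequence"/>
      </inSequence>
   </target>
</proxy>
```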

Here, if your private key is in a different location (i.e., not at ~/.ssh/) or has a different name (i.e., not id_rsa), you will need to provide it as a parameter as follows.

<parameter name="transport.vfs.SFTPIdentities">/path/id_rsa_custom_name</parameter>

Here you can see that we have referred to sendMailSequence in our proxy service via the sequence mediator. The sendMailSequence will be as follows.
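(The original sequence was likewise not preserved; the following is a minimal sketch using the standard Synapse mail-sending pattern. The subject and recipient address are hypothetical placeholders.)

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="sendMailSequence">
   <log level="custom">
      <property name="sequence" value="sendMailSequence"/>
   </log>
   <property name="Subject" value="File received" scope="transport"/>
   <property name="OUT_ONLY" value="true"/>
   <send>
      <endpoint>
         <address uri="mailto:receiver.address@gmail.com"/>
      </endpoint>
   </send>
</sequence>
```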

4. Now we are good to go! Go ahead and start WSO2 ESB. Then log in to your remote server and create an XML file (say test.xml) in /home/user/test/source, which is the location we gave as the value of the transport.vfs.FileURI property. Soon after doing that, you will see that it gets moved to /home/user/dest, which is the location we gave as the value of the transport.vfs.MoveAfterProcess property. Also, an email with test.xml attached will be sent to the email address you specified in your sendMailSequence.xml.

Also, if you added the log mediators I have put in the proxy service and sendMailSequence, you should see logs similar to the following in wso2carbon.log.

[2016-12-13 22:04:28,510]  INFO - LogMediator log = ====VFS Proxy====
[2016-12-13 22:04:28,511]  INFO - LogMediator sequence = sendMailSequence


Dynamically provisioning Jenkins slaves with Jenkins Docker plugin

In Jenkins we have a master-slave architecture, where one machine is configured as the master and some other machines as slaves. We can have a preferred number of executors on each of these machines. The following illustrates that deployment architecture.

In this approach, the concurrent builds on a given Jenkins slave are not isolated: all the concurrent builds on a given slave run in the same environment. If several builds need to run on the same slave, those builds must require the same environment, and actions must be taken to avoid issues such as port conflicts. This prevents us from fully utilizing the resources of a given slave.

With Docker we can address the above problems, which are caused by the inability to isolate builds. The Jenkins Docker plugin allows a Docker host to dynamically provision a slave, run a single build, and then tear down that slave. The following illustrates the deployment architecture.

I'll list down the steps to follow to get this done.

First let's see what needs to be done in Jenkins master.
1. Install Jenkins on one node, which will be the master node. To install Jenkins, you can either run the Jenkins war directly (java -jar jenkins.war) or deploy the war in Tomcat.

2. Install Jenkins Docker Plugin[1]

Now let's see how to configure the nodes you will use to run slave containers.

3. Install the Docker engine on each of the nodes. Please note that due to a bug [2] in the Docker plugin, you need to use a Docker version below 1.12. (I was using Docker plugin version 0.16.1.)

echo "deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list

apt-get update

apt-get install docker-engine=1.11.0-0~trusty

4. Add the current user to the docker group. This is not a required step; if it is not done, you will need root privileges (sudo) to issue docker commands. Note that once step 5 below is done, anyone with the client keys can issue instructions to the Docker daemon anyway, with no need for sudo or docker-group membership.

You can test whether the installation was successful by running the hello-world container:
docker run hello-world

5. This is not a mandatory step, but if you need to protect the Docker daemon, create a CA and server and client keys by following [3].
(Note that by default Docker runs via a non-networked Unix socket. It can also optionally communicate using an HTTP socket, and for our job we need it to communicate through an HTTP socket. For Docker to be reachable over the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate, which is what we are doing in this step.)

6. Configure /etc/default/docker as follows.
DOCKER_OPTS="--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/server-cert.pem --tlskey=/path/to/server-key.pem -H tcp://"

Now let's see what configuration needs to be done on the Jenkins master. We need the Jenkins master to know about the nodes we previously configured to run slave containers.

7. Go to https://yourdomain/jenkins/configure.
What the Docker plugin does is add Docker as a Jenkins cloud provider, so each node we have will be a new “cloud”. Therefore, for each node, add a cloud of the type “Docker” through the “Add new cloud” section. Then fill in the configuration options as appropriate. Note that the Docker URL should be something like https://ip:2376 or https://thedomain:2376, where ip/thedomain is the IP address or the domain of the node you are adding.

8. If you followed step 5, in the credentials section you need to “Add” new credentials of the type “Docker certificates directory”. This directory should contain the CA, server certs, and client keys. Please note that the files must be named exactly ca.pem, cert.pem, and key.pem, because those names appear to be hardcoded in the Docker plugin source code; with custom names it won't work (I experienced it!).

9. You can press the “Test Connection” button to test whether the Docker plugin can successfully communicate with our remote Docker host. If it is successful, the Docker version of the remote host should appear once the button is pressed. Note that if you have Docker 1.12.* installed, you will still see that the connection is successful, but once you try building a job you will get an exception, since the Docker plugin has an issue with that version.

10. Under the “Images” section, add your Docker image via “Add Docker template”. Note that this image must already exist on the nodes you previously configured, or be available on Docker Hub so that it can be pulled.
There are some other configurations to be done here as well. Under “Launch method”, choose “Docker SSH Computer Launcher” and add the credentials of the Docker container created from your Docker image. Note that these are NOT the credentials of the node itself but the credentials of our dynamically provisioned Docker Jenkins slaves.
Here you can also add a label to your Docker image. This is a normal Jenkins label which can be used to bind jobs to it.

11. OK, now we are good to try running one of our Jenkins build jobs in a Docker container! Bind the job you prefer to a Docker image using the label you previously set and click "Build Now"!

You should see something similar to the following. (Look at the bottom left corner.)

Here we can see a new node named "docker-e86492df7c41", where "docker" is the name I gave the Docker cloud I had created and "e86492df7c41" is the ID of the Docker container which was dynamically spawned to build the project.


WSO2 ESB Foreach mediator example

Say you have a payload. For example,
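(The sample payload was not preserved in this copy; a hypothetical payload matching the description, with a "data" array of student-like objects, might be:)

```json
{
  "data": [
    { "id": "101", "name": "alice" },
    { "id": "102", "name": "bob" }
  ]
}
```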

Here, you have an array with the root element "data". You need to perform some processing on each of the objects in the array and send the resulting payload to an endpoint. I'm going to use the Foreach mediator to achieve this.
Let's say the following is the resulting payload you want.
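(The original target payload was also lost here; for an input "data" array holding objects like { "id": "101", "name": "alice" }, a resulting payload matching the description below would look like this. The university and department values are hypothetical.)

```json
{
  "info": [
    {
      "id": "ALICE101ID",
      "classid": "101",
      "name": "ALICE",
      "university": "ABC University",
      "department": "Computer Science"
    },
    {
      "id": "BOB102ID",
      "classid": "102",
      "name": "BOB",
      "university": "ABC University",
      "department": "Computer Science"
    }
  ]
}
```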

This payload contains a root element "info", under which there is an array of objects having:
1. an id element, which is created from the value of the "name" element in the original payload converted to uppercase, the value of the "id" element from the original payload, and the string "ID"
2. a classid element, which is the value of the "id" element from the original payload
3. the "name" from the original payload, in uppercase
4. a university element, which is hard-coded
5. a department element, which is also hard-coded

And you need to send this resulting payload to some endpoint in JSON format.

Inside the Foreach mediator I use the Script mediator to convert the name to uppercase. Then, using the PayloadFactory mediator, I create the payload. Outside the Foreach mediator I use the Send mediator to send the resulting payload to the endpoint.

Here is the sequence.
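(The original sequence configuration was not preserved in this copy. Below is a rough sketch of the approach just described; the sequence name, XPath expressions, endpoint URL, and the hard-coded university/department values are all assumptions, and the script works over the XML view that the ESB builds for a JSON payload.)

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="my-in-seq">
   <foreach expression="//jsonObject/data">
      <sequence>
         <!-- Convert the name to uppercase and keep it in a property -->
         <script language="js">
            var name = mc.getPayloadXML()..*::name.toString();
            mc.setProperty("upperName", name.toUpperCase());
         </script>
         <!-- Build the element for this iteration -->
         <payloadFactory media-type="xml">
            <format>
               <data xmlns="">
                  <id>$1$2ID</id>
                  <classid>$2</classid>
                  <name>$1</name>
                  <university>ABC University</university>
                  <department>Computer Science</department>
               </data>
            </format>
            <args>
               <arg evaluator="xml" expression="get-property('upperName')"/>
               <arg evaluator="xml" expression="//data/id"/>
            </args>
         </payloadFactory>
      </sequence>
   </foreach>
   <!-- Send the aggregated payload out as JSON -->
   <property name="messageType" value="application/json" scope="axis2"/>
   <send>
      <endpoint>
         <address uri="http://localhost:8280/services/StudentService"/>
      </endpoint>
   </send>
</sequence>
```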
Let's use the following API to call my-in-seq.
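(The API definition was also lost in this copy; a minimal API wired to my-in-seq, with a hypothetical name and context, might look like:)

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="SampleAPI" context="/sample">
   <resource methods="POST">
      <inSequence>
         <sequence key="my-in-seq"/>
      </inSequence>
   </resource>
</api>
```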
Add both the above API and sequence to the Synapse configuration. We can then send a curl request with the sample payload to the API's context (for example, `curl -X POST -H "Content-Type: application/json" -d @payload.json http://localhost:8280/sample`, where the context path depends on your API definition) and see how it works.


Creating a Carbon Component using WSO2 Carbon Component Archetype

A Carbon Component is an OSGi bundle which has a dependency on Carbon Kernel.
Using WSO2 Carbon Component Archetype which has been published to maven central, you can create a simple Carbon Component with one command.

mvn archetype:generate -DarchetypeGroupId=org.wso2.carbon -DarchetypeArtifactId=org.wso2.carbon.archetypes.component -DarchetypeVersion=5.0.0  -DgroupId=org.sample -DartifactId=org.sample.project -Dversion=1.0.0 -Dpackage=org.sample.project

Above command will create a Carbon Component with the following structure.

├── pom.xml
└── src
    └── main
        └── java
            └── org
                └── sample
                    └── project
                        └── internal

This Carbon Component consumes an OSGi service which is exposed from Carbon Core and also registers an OSGi service of its own.

If you do not pass the parameters for the project then the default values will be applied.

You can find the source for this archetype here.

Creating a generic OSGi bundle using WSO2 Carbon Bundle Archetype

You can create a generic OSGi bundle using one command with WSO2 Carbon Bundle Archetype.
Even though the archetype is named WSO2 Carbon Bundle Archetype, the bundle it generates is a generic OSGi bundle. This archetype has also been published to Maven Central.

Execute the following command. The groupId of the archetype is org.wso2.carbon, the artifactId of the archetype is org.wso2.carbon.archetypes.bundle, and the version of the archetype is 5.0.0.

mvn archetype:generate -DarchetypeGroupId=org.wso2.carbon -DarchetypeArtifactId=org.wso2.carbon.archetypes.bundle -DarchetypeVersion=5.0.0  -DgroupId=org.sample -DartifactId=org.sample.project -Dversion=1.0.0 -Dpackage=org.sample.project

 This will create a project with the following structure.

├── pom.xml
└── src
    ├── main
    │   └── java
    │       └── org
    │           └── sample
    │               └── project
    │                   ├──
    │                   └── internal
    │                       └──
    └── test
        └── java
            └── org
                └── sample
                    └── project

If you do not pass the parameters for the project then the default values will be applied.

The source for the archetype can be found here.

Unmarshalling an XML file with JAXB

This is a simple example for unmarshalling an XML file.

This is the XML file we are going to unmarshal.
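(The original file did not survive in this copy; a hypothetical inventory document consistent with the rest of this post, with a repeated "item-name" element, might be:)

```xml
<inventory>
    <item>
        <item-name>Pen</item-name>
        <item-name>Pencil</item-name>
        <quantity>10</quantity>
    </item>
    <item>
        <item-name>Notebook</item-name>
        <quantity>5</quantity>
    </item>
</inventory>
```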
This is the schema of the above XML file
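(The schema was also lost here; one consistent with the description in this post, where "item-name" has maxOccurs="unbounded", might look like:)

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="inventory">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="item" maxOccurs="unbounded">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="item-name" type="xs:string" maxOccurs="unbounded"/>
                            <xs:element name="quantity" type="xs:int"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>
```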
Now let's create the corresponding annotated Java classes.
Here for each complex type I have created an inner class.
And for the element "item-name", which has maxOccurs="unbounded", I have used a List.
Another thing: @XmlAccessorType(XmlAccessType.FIELD) is needed only if the annotated fields inside the class have public setters. If you don't have public setters for the annotated fields inside the class, you don't need to annotate the class.

Now let's write the code for unmarshalling.
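(The original classes and unmarshalling code were not preserved in this copy. Here is a self-contained sketch, assuming a hypothetical inventory document with repeated item-name elements; the class and field names are illustrative. It uses the javax.xml.bind API, so it needs JDK 8 or the jaxb-api/jaxb-runtime dependencies on newer JDKs. The XML is inlined as a string to keep the example runnable; for a real file, pass a java.io.File to unmarshal instead.)

```java
import java.io.StringReader;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class UnmarshalDemo {

    // Root element of the document; FIELD access so JAXB reads the annotated fields directly.
    @XmlRootElement(name = "inventory")
    @XmlAccessorType(XmlAccessType.FIELD)
    public static class Inventory {
        @XmlElement(name = "item")
        public List<Item> items;
    }

    // Nested class for the "item" complex type.
    @XmlAccessorType(XmlAccessType.FIELD)
    public static class Item {
        // maxOccurs="unbounded" maps naturally to a List.
        @XmlElement(name = "item-name")
        public List<String> itemNames;
        public int quantity;
    }

    public static void main(String[] args) throws Exception {
        // Inlined so the example is self-contained;
        // for a real file use: unmarshaller.unmarshal(new java.io.File("inventory.xml"))
        String xml = "<inventory>"
                + "<item><item-name>Pen</item-name><item-name>Pencil</item-name>"
                + "<quantity>10</quantity></item>"
                + "</inventory>";

        JAXBContext context = JAXBContext.newInstance(Inventory.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        Inventory inventory = (Inventory) unmarshaller.unmarshal(new StringReader(xml));

        Item first = inventory.items.get(0);
        System.out.println(first.itemNames);   // the two item-name values
        System.out.println(first.quantity);
    }
}
```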