
JSON Data Services Using WSO2 DSS

NOTE: The following approach is only needed for releases up to DSS 3.0.0; after DSS 3.0.0, JSON support works without any special content types.

JSON is a popular data format, used frequently because of its simplicity and ease of use. WSO2 DSS has built-in support for querying data using JSON. When an HTTP request is sent to a data service endpoint, the server identifies the format of the data, i.e. SOAP, JSON etc., using the “Content-Type” HTTP header. The content types for some well-known formats are listed below:

  • text/xml – SOAP
  • application/xml – POX
  • application/json – JSON (mapped notation)
  • application/json/badgerfish – JSON (badgerfish notation)

Here, the two JSON notations represent how the JSON <-> XML conversion happens. This matters because, internally, data services deal with XML elements, attributes and namespaces. The “mapped” notation does not support namespaces; only “badgerfish” does. So when using data services, we should use the content type “application/json/badgerfish”. A quick reference on translating XML to badgerfish JSON can be found here [1].
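To make the badgerfish notation concrete, here is a small self-contained sketch that builds the badgerfish payload for the XML fragment “<employeesbynumber><employeenumber>1002</employeenumber></employeesbynumber>”, which is the request used in the curl samples in this post. The “badgerfish” helper method is purely illustrative, not part of any WSO2 API.

```java
public class BadgerfishDemo {

    // Badgerfish wraps an element's text content in a "$" key; attributes would
    // become "@name" keys and namespace declarations an "@xmlns" object.
    // In the mapped notation, the same element would (roughly) be written as
    // {"employeesbynumber":{"employeenumber":"1002"}} with no "$" wrapper.
    static String badgerfish(String element, String child, String text) {
        return String.format("{\"%s\":{\"%s\":{\"$\":\"%s\"}}}", element, child, text);
    }

    public static void main(String[] args) {
        // Builds the exact payload used with curl later in this post.
        String payload = badgerfish("employeesbynumber", "employeenumber", "1002");
        System.out.println(payload);
        // → {"employeesbynumber":{"employeenumber":{"$":"1002"}}}
    }
}
```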

So now for a small demo: if we deploy the “RDBMSSample” [2] which ships with WSO2 DSS, we will get an HTTP endpoint “http://localhost:9763/services/RDBMSSample”. We can then send an HTTP request to this EPR with a JSON payload, using the “curl” tool. A sample run is shown below.

$ curl --data '{"employeesbynumber":{"employeenumber":{"$":"1002"}}}' http://localhost:9763/services/RDBMSSample --header Content-Type:"application/json/badgerfish" --header SOAPAction:"urn:employeesByNumber"

In the request, we set the “Content-Type” header to “application/json/badgerfish”, and we also set another HTTP header, “SOAPAction”, to “urn:employeesByNumber”. This is passed as the SOAP action to the internal SOAP engine to resolve the operation of the service, and it can be found in the service WSDL. It must be passed here because the service EPR itself does not provide any clue about the service operation; it only identifies the service, so the SOAPAction must be provided for service dispatch to happen properly. The SOAPAction can be omitted if we mention the service operation in the EPR itself. The service endpoint URL should then be changed to “http://localhost:9763/services/RDBMSSample/employeesByNumber”. A sample run with this configuration is shown below.

$ curl --data '{"employeesbynumber":{"employeenumber":{"$":"1002"}}}' http://localhost:9763/services/RDBMSSample/employeesByNumber --header Content-Type:"application/json/badgerfish"
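The same request can also be issued from Java. Below is a sketch using java.net.HttpURLConnection; the endpoint, payload and header values are taken from the curl command above. Nothing goes over the wire until the commented-out lines are run against a running DSS instance, so the snippet only prepares the request.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class JsonRequestDemo {

    // Configure a POST to a data service endpoint with the badgerfish content type.
    // HttpURLConnection does not connect until getOutputStream()/getResponseCode()
    // is called, so this method performs no network I/O.
    static HttpURLConnection prepare(String endpoint) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json/badgerfish");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        String payload = "{\"employeesbynumber\":{\"employeenumber\":{\"$\":\"1002\"}}}";
        HttpURLConnection conn =
                prepare("http://localhost:9763/services/RDBMSSample/employeesByNumber");
        System.out.println(conn.getRequestProperty("Content-Type"));
        // To actually send the request (requires a running DSS instance):
        // try (OutputStream out = conn.getOutputStream()) {
        //     out.write(payload.getBytes(StandardCharsets.UTF_8));
        // }
        // int status = conn.getResponseCode();
    }
}
```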


Writing Custom Input Validators in WSO2 DSS

In WSO2 DSS, while we have the option of using several in-built input validators, we also have the facility of writing our own custom validators. This is done by simply implementing the Java interface “org.wso2.carbon.dataservices.core.validation.Validator”, which can be found in the core data services jar, namely “org.wso2.carbon.dataservices.core-x.y.z.jar”.

Below contains the definition of the Validator interface.

public interface Validator {
    public void validate(ValidationContext context, String name,
            ParamValue value) throws ValidationException;
}

So here, we are presented with several parameters, which are used as follows.

  • context : The “validation context”, which gives you access to the environment in which the validation is taking place; essentially, this provides access to the other variables in the input
  • name : The name of the parameter being validated
  • value : The value of the parameter being validated

I will be showing a simple example on how to write a custom validator and data service which uses it.

The validator is based on the sample H2 database that ships with WSO2 DSS, so you won’t have to worry about creating a database. Here, the Employee table will be used to insert records and to validate them. The validation criteria are as follows:

  1. If neither the first name nor the last name is given, the email must be given
  2. The first and last names cannot be given on their own; either both must be given, or neither
  3. If both the first/last names and the email are given, then the email should be in a specific format, email = lowercase(first four letters of last name + first letter of first name) +
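As a rough illustration, the criteria above can be sketched in plain Java as shown below. This is only a sketch of the logic, not the actual org.acme.EmployeeEmailValidator class linked in this post; also, criterion 3 is truncated in the text, so only the stated prefix of the email format is checked here.

```java
public class EmployeeValidationSketch {

    // Returns true when the first/last name and email satisfy the three criteria.
    static boolean isValid(String firstName, String lastName, String email) {
        boolean hasFirst = firstName != null && !firstName.isEmpty();
        boolean hasLast = lastName != null && !lastName.isEmpty();
        boolean hasEmail = email != null && !email.isEmpty();
        // Criterion 2: first and last names must be given together, or not at all
        if (hasFirst != hasLast) {
            return false;
        }
        // Criterion 1: if both names are absent, the email is mandatory
        if (!hasFirst && !hasEmail) {
            return false;
        }
        // Criterion 3 (only the stated prefix is checked, since the full format
        // is truncated in the post): the email must start with
        // lowercase(first four letters of last name + first letter of first name)
        if (hasFirst && hasEmail) {
            String prefix = (lastName.substring(0, Math.min(4, lastName.length()))
                    + firstName.charAt(0)).toLowerCase();
            if (!email.startsWith(prefix)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValid(null, null, "someone@acme.org"));     // true (criterion 1)
        System.out.println(isValid("John", null, null));                 // false (criterion 2)
        System.out.println(isValid("John", "Smith", "smitj@acme.org"));  // true (prefix "smitj")
    }
}
```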

The Java code for org.acme.EmployeeEmailValidator class can be found at

The data service “AddEmployeeDS.dbs” definition can be found at

You will have to compile the “” file and create a jar file out of it. When compiling, point your build path to “/wso2dataservices-x.y.z/repository/components/plugins/org.wso2.carbon.dataservices.core-x.y.z.jar”. After building and creating the jar file, copy it to “/wso2dataservices-x.y.z/repository/components/lib” and restart the server. After that, you can upload the “AddEmployeeDS.dbs” data service and use the try-it tool to play around with it and test the different validation scenarios.

Distributed Transactions with Apache DBCP

So in the last few days, I was working on adding distributed transaction support to WSO2 Data Services Server for its upcoming release. We use Apache DBCP for connection pooling, and its XA transaction support doesn’t seem to be well documented, so I thought of sharing some simple steps on how to do this.

Creating the XADataSource

This is the first thing you will need. In order to retrieve XA two-phase-commit-aware XAConnections, you will need to instantiate the respective XADataSource class. For example, in Oracle this is “oracle.jdbc.xa.client.OracleXADataSource”, and in MySQL it’s “com.mysql.jdbc.jdbc2.optional.MysqlXADataSource”. These classes always contain a no-argument default constructor, and you will then have to set the respective properties for the username/password, connection URL etc.

For example, using Oracle XE, this would be as follows,

OracleXADataSource ds = new oracle.jdbc.xa.client.OracleXADataSource();

These Java bean style properties are usually set by App servers in external configurations, in defining the data sources.

Getting the TransactionManager

The next step is to get yourself a transaction manager. There are many commercial and open source providers for this; one transaction manager that I tested and that worked well is by Atomikos [1]. I also tried the Bitronix transaction manager, but when using it with DBCP, it threw an exception when trying to enlist the respective XAResource of a Connection. The Atomikos transaction manager can be created in the following way.

TransactionManager tm = new com.atomikos.icatch.jta.UserTransactionManager();

DBCP Configuration

Now with the XADataSource and the TransactionManager in hand, we can continue with the creation of the DBCP XA connection factories. The key class for this is “org.apache.commons.dbcp.managed.DataSourceXAConnectionFactory”. The following code snippet demonstrates the usage.

DataSourceXAConnectionFactory connectionFactory = new DataSourceXAConnectionFactory(tm, ds);

Here, the connection factory takes in the TransactionManager and the XADataSource objects we created earlier. The following code creates the rest of the connection pool and the final pooling DataSource object.

GenericObjectPool pool = new GenericObjectPool();
PoolableConnectionFactory factory = new PoolableConnectionFactory(connectionFactory, pool, null, null, false, true);
ManagedDataSource dataSource = new ManagedDataSource(pool, connectionFactory.getTransactionRegistry());

The final data source object is of type “ManagedDataSource”; this is an important class which derives from “PoolingDataSource” and takes care of enlisting resources with the transaction manager.

So after all of the above is set up, let’s look at a sample run of a distributed transaction.

tm.begin();
Connection c = dataSource.getConnection();
PreparedStatement stmt = c.prepareStatement("insert into Customers values (customers_seq.nextval, 'XXX')");
stmt.executeUpdate();
c.close();
tm.commit();

So the above code contains just a single statement, but any connection created between the begin() and commit() calls will be contained in a single global transaction. You will notice here that we simply close the connection we get and do not call commit() ourselves. This is because invoking close() doesn’t really close the connection as with conventional non-XA connections; it is a pseudo-close, which just signals the end of using this connection. The actual committing is done by the transaction manager, when it encounters its “commit()” call.
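To make the pseudo-close behaviour concrete, here is a self-contained toy sketch. None of these classes are the real DBCP or JTA ones; they are made-up stand-ins that only mimic the call ordering, showing that close() merely returns the connection while the real completion happens at the coordinator’s commit().

```java
import java.util.ArrayList;
import java.util.List;

public class PseudoCloseDemo {

    // Toy stand-in for a transaction-managed connection: close() does not end
    // the transaction, it only hands the connection back to the pool.
    static class ManagedConn {
        final List<String> log;
        ManagedConn(List<String> log) { this.log = log; }
        void execute(String sql) { log.add("execute: " + sql); }
        void close() { log.add("pseudo-close: returned to pool, work still pending"); }
    }

    // Toy stand-in for the transaction manager, which performs the real commit.
    static class ToyTransactionManager {
        final List<String> log;
        ToyTransactionManager(List<String> log) { this.log = log; }
        void begin() { log.add("begin"); }
        void commit() { log.add("commit: transaction manager completes the work"); }
    }

    // Replays the lifecycle from the snippet above and records the ordering.
    static List<String> run() {
        List<String> log = new ArrayList<>();
        ToyTransactionManager tm = new ToyTransactionManager(log);
        tm.begin();
        ManagedConn c = new ManagedConn(log);
        c.execute("insert into Customers values (1, 'XXX')");
        c.close();   // does not commit anything
        tm.commit(); // the actual commit happens here
        return log;
    }

    public static void main(String[] args) {
        run().forEach(System.out::println);
    }
}
```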

So what if we want to mix non-XA data sources into an environment where there are XA data sources and a transaction manager is used to commit the operations? This can be done by using the “LocalXAConnectionFactory” class instead of “DataSourceXAConnectionFactory”. By using this, non-XA connections also commit and roll back as signalled by the transaction manager. Some sample code on how to create the connection factory is shown below.

ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(jdbcURL, dbcpProps);
connectionFactory = new LocalXAConnectionFactory(tm, connectionFactory);

In the above code, dbcpProps holds the usual properties you would use in creating a DBCP connection factory, such as the username/password and other driver-specific connection properties.
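Since DriverManagerConnectionFactory ultimately hands these properties to the JDBC driver, a minimal sketch of building dbcpProps could look like the following. The URL and credentials are placeholders; most JDBC drivers expect the “user” and “password” keys, but exact key names can vary by driver.

```java
import java.util.Properties;

public class DbcpPropsDemo {

    // Build the property set handed to DriverManagerConnectionFactory;
    // all values below are placeholders.
    static Properties buildProps() {
        Properties dbcpProps = new Properties();
        dbcpProps.setProperty("user", "dbuser");     // key names expected by most JDBC drivers
        dbcpProps.setProperty("password", "dbpass");
        return dbcpProps;
    }

    public static void main(String[] args) {
        // The connection URL is passed separately to the factory, not in the properties.
        String jdbcURL = "jdbc:mysql://localhost:3306/test"; // placeholder
        System.out.println(buildProps().getProperty("user"));
    }
}
```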

So I guess that covers it. Have fun with DTP!