While developing web applications, we come across many scenarios where columns should be rendered dynamically, either based on the entitlement of the user or based on the result set itself. Dynamic columns give users control over what they want to see (especially when there is a huge amount of data to work with). For applications with sensitive data, this can even be extended to act as a security layer, where access to specific data is controlled with high precision.

In this article, I will explain one of the methods to implement this in any J2EE application with little or no code change.

High Level Architecture


In a nutshell, this design uses the application context of the container to maintain the values pertaining to a particular user. If the requirement is not to keep the preference after a user session is terminated, this can be achieved by destroying the object stored in the context.

We will start by creating a singleton class, which will be used to store the user's column preferences. The preference object can be mapped against the user id or any other primary key, so that different preferences are maintained for different users. When the container starts, the singleton instance is created. The default preferences can be loaded from a property/XML file or from a data store (DB). This object will contain the preferences of different pages under different key names, so that the same object can be used to maintain preferences across the application. It is read during the logon operation, and if the object in the application context doesn't contain any values (the user logs in for the first time, or the preference is specific to the session), the default values are loaded. Once the page loads, the preference is read from the application context and presented to the user. If the user edits his preferences, the application context is updated. Note that the application context is not persistent between container restarts, so an appropriate mechanism should be used to persist the data.
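As a rough sketch of the store described above (all class, method, and key names here are my own illustration, not part of any framework API), the per-user, per-page preference lookup with a default fallback might look like:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the per-user column-preference store described above.
// All names are illustrative, not from any framework.
public class PreferenceStore {
    // userId -> (pageKey -> ordered list of visible columns)
    private final Map<String, Map<String, List<String>>> prefs = new HashMap<>();
    private final Map<String, List<String>> defaults = new HashMap<>();

    public void setDefault(String pageKey, List<String> columns) {
        defaults.put(pageKey, columns);
    }

    // Returns the user's saved preference, falling back to the defaults
    // (first login, or when preferences are session-scoped).
    public List<String> columnsFor(String userId, String pageKey) {
        Map<String, List<String>> userPrefs = prefs.get(userId);
        if (userPrefs != null && userPrefs.containsKey(pageKey))
            return userPrefs.get(pageKey);
        return defaults.getOrDefault(pageKey, new ArrayList<String>());
    }

    public void save(String userId, String pageKey, List<String> columns) {
        prefs.computeIfAbsent(userId, k -> new HashMap<>()).put(pageKey, columns);
    }
}
```

In a real deployment this object would live in the application context and be persisted on change, as described above.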


Let’s go through the implementation now. The following steps describe how to integrate this component to an existing Struts application.

Start-up servlet and initializing the Singleton Class

Create an initializer servlet and make an entry for it in the web.xml file so that the servlet starts when the container is initialized. Make sure load-on-startup is set to 1, which ensures that the application server loads the servlet at startup.

  <servlet>
    <servlet-name>InitializerServlet</servlet-name>
    <servlet-class>InitializerServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

Next, create a singleton class that contains getter and setter methods for the dynamic column preference. The backing object can be any collection; in this example we use a Hashtable, which stores the preference list against the primary key. The set and get methods in the singleton should be synchronized so that simultaneous access is serialized. Also override the clone() method in your singleton so the instance cannot be cloned.

public class AppSingleton implements Serializable {
	private Hashtable customizeViewValues = null;
	private static AppSingleton appSingleton = null;

	private AppSingleton() { }

	public synchronized void setCustomizeViewValues(Hashtable customizeViewValues) {
		this.customizeViewValues = customizeViewValues;
	}

	public synchronized Hashtable getCustomizeViewValues() {
		return customizeViewValues;
	}

	public static synchronized AppSingleton getInstance() {
		if (appSingleton == null)
			appSingleton = new AppSingleton();
		return appSingleton;
	}

	public Object clone() throws CloneNotSupportedException {
		throw new CloneNotSupportedException();
	}
}
In the startup servlet, create an instance of the singleton class. Once created, the object is available in the application context of the container, and no one can create another instance until the object created at startup is destroyed. Since we have overridden the clone() method, the object cannot be cloned either. These measures ensure the integrity of the user preferences stored in the singleton. A sample initializer servlet looks like the following code.

public class InitializerServlet extends HttpServlet {
	public void init() throws ServletException {
		try {
			AppSingleton appSingleton = AppSingleton.getInstance();
		} catch (Exception e) {
			throw new ServletException(e);
		}
	}

	public void destroy() { }

	public void service(HttpServletRequest request, HttpServletResponse response)
		throws ServletException, IOException { }
}

Now, create a Data Transfer Object (DTO) / Value Object (VO) for storing the values. The VO/DTO will contain just two getters and setters: one for the column display name and the other for the bean property. This will be a plain POJO.
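A minimal version of such a DTO (the class and field names are illustrative) could be:

```java
import java.io.Serializable;

// Sketch of the column-preference DTO/VO described above; names are illustrative.
public class ColumnPreferenceDTO implements Serializable {
    private String columnDisplayName; // what the user sees in the table header
    private String beanProperty;      // the bean property rendered in that column

    public String getColumnDisplayName() { return columnDisplayName; }
    public void setColumnDisplayName(String columnDisplayName) {
        this.columnDisplayName = columnDisplayName;
    }

    public String getBeanProperty() { return beanProperty; }
    public void setBeanProperty(String beanProperty) {
        this.beanProperty = beanProperty;
    }
}
```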

Populate the Application context

When the container starts, populate the list in the application context from the property file or from the data source. If you have a separate page for choosing the displayed columns, you can use the same list to render the values initially. Similarly, if the user has changed his preference, update the application context accordingly. This can be done in the login action, once the user is authenticated and authorized. Use your own logic to fetch the user's preferences, then build a list of DTOs/VOs containing the display name and the property name. This list is stored in the application context against the primary key. Before updating the application context, check whether the primary key is already present in the Hashtable; if it is, update the entry, otherwise create a new one.

A sample property file is given below. By using different keys, we can have entries for different pages. The columns that must be displayed to every user, irrespective of individual preference, can also be marked here under a different key. The columns users are not allowed to modify are added to the rendering list when the request comes in from the particular page, not at logon time; their values are appended to the modifiable-columns list and rendered to the user.

Validations$Optional=Plan #, Plan Name, Administrator  
Validations$Core= Plan Val Description, Plan Val Status 
# Optional represents the Columns users can modify 
# Core represents the Columns users can’t modify 
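One way to read such a file (a sketch only; the key-naming convention with the "$Optional" and "$Core" suffixes follows the sample above, and the class name is my own):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

// Sketch: splitting the "$Optional" and "$Core" column lists for one page key,
// following the key convention of the sample property file above.
public class ColumnConfig {
    private final Properties props;

    public ColumnConfig(Properties props) {
        this.props = props;
    }

    // kind is "Optional" (user-modifiable) or "Core" (fixed) per the convention.
    public List<String> columns(String pageKey, String kind) {
        String raw = props.getProperty(pageKey + "$" + kind, "");
        if (raw.isEmpty())
            return Collections.emptyList();
        String[] parts = raw.split(",");
        for (int i = 0; i < parts.length; i++)
            parts[i] = parts[i].trim();
        return Arrays.asList(parts);
    }
}
```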
Rendering Logic

Once the values are available in the session, iterate over them with the JSP logic:iterate tag to render the column names. Then, to display the values from the result set, use an outer logic:iterate over the list containing the rows for the page, which renders the <tr> tags. Inside it, use an inner logic:iterate over the column list; use a bean:define tag to capture the column's property name in a scriptlet variable, and then a bean:write tag to display the value of that property. This makes the rendering fully dynamic.

To display the column names,

<logic:iterate name="<Form Bean Name>" id="testId"
		property="<Name of the List>">
	<bean:write name="testId" property="<Col Disp Name>"/>
</logic:iterate>

To display the result set,

<logic:iterate name="<Form Name>" id="outerId"
		property="<Property of the Hitlist>">
	<logic:iterate name="<Form Name>" id="innerId"
			property="<Name of the List>">
		<bean:define name="innerId" id="propId"
				property="<Col Property>" type="String"/>
		<bean:write name="outerId" property="<%= propId %>"/>
	</logic:iterate>
</logic:iterate>

We will also have situations where hyperlinks, textboxes, etc. must be displayed in the result set; the same logic can be used to render these different objects in the JSP. Just before the bean:write tag, add a logic:equal tag to check for the specific type and render the display accordingly.

This architecture is highly customizable and can easily be plugged into any existing J2EE application. It can also be easily enhanced to incorporate new functionality.

Published by Venish Joe on Sunday, April 18, 2010

The quick and dirty way to concatenate strings in Java is the concatenation operator (+). This yields reasonable performance when you need to combine two or three fixed-size strings. But if you concatenate n strings in a loop, the performance degrades quadratically. Given that String is immutable, a large number of concatenation operations with (+) gives the worst performance. But how bad? And how do StringBuffer, StringBuilder, and String.concat() fare if we put them through a performance test? This article will try to answer those questions.

We will use Perf4J to measure performance, since this library gives us aggregated statistics such as mean, minimum, maximum, and standard deviation over a set time span. In the code, we will append a string ("*") 50,000 times, and this run will be repeated 21 times so that we get a meaningful standard deviation. The following methods will be used to concatenate strings.
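Perf4J's StopWatch is used in the listings below; if it is not on your classpath, the same measurement can be approximated with System.nanoTime. A rough, self-contained sketch (my own, not taken from the article's download package):

```java
// Simplified stand-in for the Perf4J stopwatch used in this article: builds the
// same 50,000-star string with (+) and with StringBuilder and times each pass.
public class ConcatTimer {
    static final int INNER_ITERATION = 50000;

    static String buildWithAdd() {
        String s = "";
        for (int i = 0; i < INNER_ITERATION; i++)
            s += "*"; // allocates a new String every iteration
        return s;
    }

    static String buildWithBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < INNER_ITERATION; i++)
            sb.append("*"); // mutates one buffer in place
        return sb.toString();
    }

    static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        System.out.println("(+)           : " + timeNanos(ConcatTimer::buildWithAdd) + " ns");
        System.out.println("StringBuilder : " + timeNanos(ConcatTimer::buildWithBuilder) + " ns");
    }
}
```

Unlike Perf4J, this gives single-run numbers only; wrap it in an outer loop yourself if you want a mean and standard deviation.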

And finally we will look at the byte code to see how each of these operations perform. Let’s start building the class. Note that each of the block in the code should be wrapped around the Perf4J library to calculate the performance in each iteration. Let’s define the outer and inner iterations first.

private static final int OUTER_ITERATION=20;
private static final int INNER_ITERATION=50000;

Now let's implement each of the four methods mentioned above. Nothing fancy here: plain implementations of (+), String.concat(), StringBuffer.append(), and StringBuilder.append().

String addTestStr = "";
String concatTestStr = "";
StringBuffer concatTestSb = null;
StringBuilder concatTestSbu = null;

for (int outerIndex=0;outerIndex<=OUTER_ITERATION;outerIndex++) {
    StopWatch stopWatch = new LoggingStopWatch("StringAddConcat");
    addTestStr = "";
    for (int innerIndex=0;innerIndex<=INNER_ITERATION;innerIndex++)
        addTestStr += "*";
    stopWatch.stop();
}

for (int outerIndex=0;outerIndex<=OUTER_ITERATION;outerIndex++) {
    StopWatch stopWatch = new LoggingStopWatch("StringConcat");
    concatTestStr = "";
    for (int innerIndex=0;innerIndex<=INNER_ITERATION;innerIndex++)
        concatTestStr = concatTestStr.concat("*");
    stopWatch.stop();
}

for (int outerIndex=0;outerIndex<=OUTER_ITERATION;outerIndex++) {
    StopWatch stopWatch = new LoggingStopWatch("StringBufferConcat");
    concatTestSb = new StringBuffer();
    for (int innerIndex=0;innerIndex<=INNER_ITERATION;innerIndex++)
        concatTestSb.append("*");
    stopWatch.stop();
}

for (int outerIndex=0;outerIndex<=OUTER_ITERATION;outerIndex++) {
    StopWatch stopWatch = new LoggingStopWatch("StringBuilderConcat");
    concatTestSbu = new StringBuilder();
    for (int innerIndex=0;innerIndex<=INNER_ITERATION;innerIndex++)
        concatTestSbu.append("*");
    stopWatch.stop();
}

Let’s run this program and generate the performance metrics. I ran this program in a 64-bit OS (Windows 7), 32-bit JVM (7-ea), Core 2 Quad CPU (2.00 GHz) with 4 GB RAM.

The output from the 21 iterations of the program is plotted below.


Well, the results are pretty conclusive and as expected. One interesting point is how well String.concat() performs. We all know String is immutable, so why is its performance so much better than (+)? To answer that, we should look at the byte code. I have included the whole byte code in the download package, but let's look at the snippet below.

45: new #7; //class java/lang/StringBuilder
48: dup
49: invokespecial #8; //Method java/lang/StringBuilder."<init>":()V
52: aload_1
53: invokevirtual #9; //Method java/lang/StringBuilder.append:
56: ldc #10; //String *
58: invokevirtual #9; //Method java/lang/StringBuilder.append:
61: invokevirtual #11; //Method java/lang/StringBuilder.toString:()
64: astore_1

This byte code shows how the compiler handles the (+) operator: each concatenation is compiled into a new StringBuilder, a chain of append() calls, and a toString(). Because a fresh StringBuilder and a new String are created on every loop iteration, (+) degrades badly in a loop. String.concat() avoids that per-iteration builder allocation with a direct character-array copy, but since the source object is still an immutable String, a new String is allocated on every call, so it cannot match a single reused StringBuilder.
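The StringBuilder/append/toString sequence in that byte code corresponds to this source-level expansion (a hand-written illustration, not decompiled output):

```java
// Hand-expansion of what the byte code above does for one `result + "*"`:
// each (+) in a loop allocates a fresh StringBuilder, which is the source
// of the quadratic cost measured earlier.
public class PlusExpansion {
    static String plusOnce(String result) {
        return result + "*";
    }

    // What the compiler emits for plusOnce, written out by hand.
    static String expandedOnce(String result) {
        return new StringBuilder().append(result).append("*").toString();
    }
}
```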

So for simple operations, we should prefer String.concat() over (+) if we don't want to create a new StringBuffer/StringBuilder instance. But for heavy concatenation we should not use the (+) operator; as the performance results show, it will bring the application to its knees and spike CPU utilization. For the best performance, the clear choice is StringBuilder, as long as you do not need thread safety or synchronization (in which case use StringBuffer).

The full source code, compiled class & the byte code is available for download in the below link.

Download Source, Class & Byte Code: String_Concatenation_Performance.zip

Published by Venish Joe on Sunday, November 08, 2009

In my previous article about NIO.2, we saw how to implement a service that monitors a directory recursively for changes. In this article we will look at another improvement in JDK 7 (NIO.2) called FileVisitor. This allows us to implement a search or an index: for example, we can find all the *some_pattern* files in a given directory recursively, and/or delete or copy all the *some_pattern* files in a file system. In a nutshell, FileVisitor gets us a list of files from a file system based on a pattern, which can then be processed based on our requirements.

FileVisitor is an interface, and our class should implement it. It has methods that are called before the traversal, at both the directory level and the file level, and one method called after a directory's traversal is complete, which can be used for cleanup or post-processing. The important points of the interface are given in the diagram below.


While I think FileVisitor is the best way to handle this, JDK 7 NIO.2 gives another option to achieve the same: a class named SimpleFileVisitor (which implements FileVisitor). It should be self-explanatory: a simplified version of FileVisitor. We can extend SimpleFileVisitor in our class and traverse the directory overriding only the methods we need; if any step fails, we will get an IOException.

In my opinion, FileVisitor is better because it forces you to implement the methods (sure, you can leave them blank), and these methods are really important if you plan to implement recursive delete/copy or work with symbolic links. For example, if you are copying files to a directory, you should make sure the target directory is created before copying, which can be done in preVisitDirectory().

The other area of concern is symbolic links and how they are handled by FileVisitor. This can be controlled using the FileVisitOption enums. By default, symbolic links are not followed, so that we don't accidentally delete directories in a recursive delete. If you want to handle them manually, there are two options: FOLLOW_LINKS (follow the links) and DETECT_CYCLES (catch circular references).

If you want to exclude some directory from the traversal, or if you are searching for a directory or file and want to stop once you find it, that can be implemented using FileVisitor's return type, FileVisitResult. SKIP_SUBTREE skips a directory and its subdirectories; TERMINATE stops the traversal.
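As a sketch using the final JDK 7+ API (whose method signatures differ slightly from the EA builds described in this article), SKIP_SUBTREE can prune a directory like this; the directory name "skipme" is my own example:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

// Sketch on the final JDK 7+ API: returning SKIP_SUBTREE from
// preVisitDirectory prunes that whole subtree from the walk.
public class SkipDemo extends SimpleFileVisitor<Path> {
    final List<String> visited = new ArrayList<>();

    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
        if (dir.getFileName() != null && dir.getFileName().toString().equals("skipme"))
            return FileVisitResult.SKIP_SUBTREE; // do not descend into this subtree
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        visited.add(file.getFileName().toString());
        return FileVisitResult.CONTINUE;
    }
}
```

Passing an instance to Files.walkFileTree(root, new SkipDemo()) records every file except those under a "skipme" directory.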

The traversal is initiated by the walkFileTree() method of the Files class. It takes the starting directory (the root of your search) as a parameter. An overload also accepts a maximum depth; pass Integer.MAX_VALUE for unlimited depth. And, as mentioned in the diagram above, pass FileVisitOption values for symbolic links if needed.

Enough with the API description, let's write some sample code to implement what we discussed. We will be using the SimpleFileVisitor so that in our demo we don’t need to implement all the methods.

Let's start by defining the pattern to search for. In this example, we will search recursively for all file/directory names matching *txt* in any given directory. This can be done with getPathMatcher() in FileSystems.

PathMatcher pathMatcher = FileSystems.getDefault().getPathMatcher("glob:" + "*txt*");

Now, let's initiate the search by calling walkFileTree() as shown below. We are not defining anything specific for symbolic links, so by default they are not followed.

Files.walkFileTree(Paths.get("D://Search"), fileVisitor);

Let's go through the implementation of our SimpleFileVisitor subclass. We will override only visitFile() and preVisitDirectory() in this example, but it's good practice to override all five methods so that we have more control over the search. The implementation is simple: based on the pattern, the methods below match a directory or file name and print its path.

public FileVisitResult visitFile(Path filePath, BasicFileAttributes basicFileAttributes) {
    if (filePath.getName() != null && pathMatcher.matches(filePath.getName()))
        System.out.println("FILE: " + filePath);
    return FileVisitResult.CONTINUE;
}

public FileVisitResult preVisitDirectory(Path directoryPath) {
    if (directoryPath.getName() != null && pathMatcher.matches(directoryPath.getName()))
        System.out.println("DIR: " + directoryPath);
    return FileVisitResult.CONTINUE;
}

Once this is completed, we can use the postVisitDirectory() to perform additional tasks or any cleanup if needed. A sample output from my machine is given below.


The complete source code is given below. Please note that you need JDK7 to run this code. I have also given a link to download the compiled class along with source.

Download Source & Class: NIO2_FileVisitor.zip

Complete Source Code.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class NIO2_FileVisitor extends SimpleFileVisitor<Path> {
    private PathMatcher pathMatcher;

    public FileVisitResult visitFile(Path filePath,
		BasicFileAttributes basicFileAttributes) {
        if (filePath.getName() != null &&
            pathMatcher.matches(filePath.getName()))
            System.out.println("FILE: " + filePath);
        return FileVisitResult.CONTINUE;
    }

    public FileVisitResult preVisitDirectory(Path directoryPath) {
        if (directoryPath.getName() != null &&
            pathMatcher.matches(directoryPath.getName()))
            System.out.println("DIR: " + directoryPath);
        return FileVisitResult.CONTINUE;
    }

    public static void main(String[] args) throws IOException {
        NIO2_FileVisitor fileVisitor = new NIO2_FileVisitor();
        fileVisitor.pathMatcher = FileSystems.getDefault().
		getPathMatcher("glob:" + "*txt*");
        Files.walkFileTree(Paths.get("D://Search"), fileVisitor);
    }
}

Published by Venish Joe on Monday, October 26, 2009

As we know, all compiled Java classes run inside the JVM. The default class loader from Sun loads classes into the JVM for execution; it is the part of the JVM that loads compiled byte code into memory. In this article, I will show how to convert a compiled Java class to an array of bytes, and then load that array of bytes into another class (possibly over the network) and execute it.

So the question arises: why would we write a custom class loader? There are some distinct advantages, some of which are listed below.

  • We can load a class over any network protocol. Since the Java class can be converted to a series of numbers (an array of bytes), we can use most protocols.
  • We can load dynamic classes based on the type of user; this is especially useful when you want to validate your software's license over the web, or if you are paranoid about security.
  • It is more flexible and secure; you can encrypt the byte stream (asymmetrically or symmetrically) to ensure safer delivery.

For this article we will create three classes:

  1. JavaClassLoader – the custom class loader, which loads the array of bytes and executes it. In other words, the client program.
  2. Class2Byte – the Java class which converts any compiled class to an array of bytes.
  3. ClassLoaderInput – the class which will be converted to an array of bytes and transferred.

Let's divide this article into two sections: in the first, we will convert the Java class to an array of bytes; in the second, we will load that array.

Create & Convert the Java class to array of bytes

Let’s write a simple class (ClassLoaderInput) which just prints a line. This is the class which will be converted to a byte array.

public class ClassLoaderInput {
	public void printString() {
		System.out.println("Hello World!");
	}
}

Now, let's write another class (Class2Byte) which converts ClassLoaderInput to a byte array. The concept is simple: compile the class above, read the class file through an input stream into a byte array (tracking the offset), and write the result to an output stream. We need these bytes as comma-separated values, so we use a StringBuffer to add commas between the bytes.
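As an aside, on Java 7 and later the manual offset loop below can be collapsed into Files.readAllBytes; a sketch (the class and method names are my own, not the article's Class2Byte):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: the manual read-loop below collapsed into Files.readAllBytes (Java 7+),
// plus the same comma-separated rendering. Names are illustrative.
public class Class2ByteModern {
    // Render a byte array as comma-separated decimal values, e.g. "-54,-2,-70,-66".
    static String toCsv(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < bytes.length; i++) {
            if (i > 0)
                sb.append(",");
            sb.append(bytes[i]);
        }
        return sb.toString();
    }

    // Read a whole file (e.g. a compiled .class) and render it as CSV bytes.
    static String fileToCsv(String fileName) throws IOException {
        return toCsv(Files.readAllBytes(Paths.get(fileName)));
    }
}
```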

int _offset=0;
int _read=0;

File fileName = new File(args [0]);
InputStream fileInputStream = new FileInputStream(fileName);
FileOutputStream fileOutputStream = new FileOutputStream(args[1]);
PrintStream printStream = new PrintStream(fileOutputStream);
StringBuffer bytesStringBuffer = new StringBuffer();

byte[] byteArray = new byte[(int)fileName.length()];
while (_offset < byteArray.length && 
	(_read=fileInputStream.read(byteArray, _offset, 
	byteArray.length-_offset)) >= 0)
    _offset += _read;    

for (int index = 0; index < byteArray.length; index++)
    bytesStringBuffer.append(byteArray[index]).append(",");

printStream.print(bytesStringBuffer.length()==0 ? "" : 
	 bytesStringBuffer.substring(0, bytesStringBuffer.length()-1));

Now let’s run this file and generate the output. A sample output from my machine is below.


Now we have the sample class (ClassLoaderInput) as a bunch of numbers. These numbers can be transferred over any protocol to our custom class loader, which will "reconstruct" the class from the bytes and run it, without any physical trace on the client machine (the array of bytes stays in memory).

Load the array of bytes and execute

Now, to the important part of this article: we are going to write a custom class loader which loads that bunch of numbers (the array) and executes it. The array of bytes could be transferred over the network, but in this example we define it as a constant in the class loader for demonstration purposes.

Let’s start by defining the array of bytes.

private int[] data = {-54,-2,-70,-66,0,0,0,51,0,31,10,0,6,0,17,9,0,18,0,19,8,

The conversion of these bytes to a class is done by the ClassLoader.defineClass() method. We supply the stream of bytes that make up the class data; the bytes in positions off through off+len-1 must have the format of a valid class file as defined by the Java Virtual Machine Specification, and the offset and length are passed as additional parameters. Once defineClass() converts the array to a class, we can use reflection to execute the methods of the class.

JavaClassLoader _classLoader = new JavaClassLoader();
byte[] rawBytes = new byte[_classLoader.data.length];
for (int index = 0; index < rawBytes.length; index++)
    rawBytes[index] = (byte) _classLoader.data[index];
Class regeneratedClass = _classLoader.defineClass(args[0], 
	rawBytes, 0, rawBytes.length);
regeneratedClass.getMethod(args[1]).invoke(regeneratedClass.newInstance());
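A self-contained variant of such a loader (all names here are my own illustration, not the article's JavaClassLoader) overrides findClass() instead, so the regular loadClass() delegation model is preserved:

```java
// Sketch of a loader that defines a class from an in-memory byte array.
// Class and field names are illustrative. Overriding findClass() (rather
// than calling defineClass directly) keeps parent-first delegation intact.
public class ByteArrayClassLoader extends ClassLoader {
    private final String name;
    private final byte[] bytes;

    public ByteArrayClassLoader(String name, byte[] bytes) {
        this.name = name;
        this.bytes = bytes;
    }

    @Override
    protected Class<?> findClass(String className) throws ClassNotFoundException {
        if (className.equals(name))
            // Throws ClassFormatError if the bytes are not a valid class file.
            return defineClass(name, bytes, 0, bytes.length);
        return super.findClass(className);
    }
}
```

loadClass() first delegates to the parent loader; only when the parent cannot find the class does findClass() define it from the byte array.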

Now, let's compile the class loader and run it. The class name and method name are passed as runtime arguments. If you have done everything right, you should see the output from the input class (ClassLoaderInput) we created initially. A sample output from my machine is below.



The precompiled classes and the source code can be downloaded from the below location.

Download Source & Class: Java_Dynamic_Class_Byte_Array.zip

Published by Venish Joe on Wednesday, October 21, 2009

Many applications we use on a day-to-day basis, such as music organizers and file editors, monitor a directory for changes to files/directories and react in the application on the fly when changes are detected. Since Java does not have direct access to system-level calls (unless we use JNI, which makes the code platform-specific), the only way to monitor a directory used to be a separate thread, which consumes a lot of resources (memory and disk I/O) polling for changes. If we have sub-directories and need recursive monitoring, the thread becomes even more resource-intensive.

There was a JSR (Java Specification Request) to add/rewrite more I/O APIs for the Java platform. This was implemented in JDK 7 as JSR 203, with support for APIs such as file system access, scalable asynchronous I/O operations, socket-channel binding and configuration, and multicast datagrams.

JSR 203 is one of the big features of JDK 7 (a Developer Preview is available on java.sun.com), implemented as the second I/O package for Java, called NIO.2. I will be looking into more of these packages in future posts, but in this one, I will show how to monitor a directory and its sub-directories for changes using NIO.2 (JDK 7).

The APIs we will use to monitor a directory are WatchService (a watch service that watches registered objects for changes and events), WatchKey (a token representing the registration of a watchable object with a WatchService), and WatchEvent (an event, or a repeated event, for an object registered with a WatchService). So, without further explanation, let's start working on the code.

Please note that you need JDK 7 to run this program. At the time of writing, JDK 7 is available as an EA (Early Access) build on the Java Early Access Downloads page. Download the JDK and install it.

The first step is to get a directory to monitor. Path is one of the new I/O APIs in NIO.2 and gives us more control over I/O. So let's get the directory to watch. If you want to watch the directory recursively, another boolean flag would need to be defined, but in this example we will watch only the parent directory.

Path _directotyToWatch = Paths.get(args[0]);

Now let's create a watch service for the above directory and register a key with the service. In the watch key we define which events we want to look for. In this example we will monitor create, delete, and rename/modify of files or directories in the path.

WatchService watcherSvc = FileSystems.getDefault().newWatchService();
WatchKey watchKey = _directotyToWatch.register(watcherSvc, 
	ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);

Now we have all the variables defined. Let's start an infinite loop to monitor the directory for changes using WatchEvent. We poll for events in the directory, and when an event is triggered (based on the WatchKey registration) we print the type of event and the name of the file/directory on which it occurred. Once done, we reset the watch key.

while (true) {
    for (WatchEvent<?> event: watchKey.pollEvents()) {
        WatchEvent<Path> watchEvent = castEvent(event);
        System.out.println(event.kind().name().toString() + " " 
		+ _directotyToWatch.resolve(watchEvent.context()));
    }
    watchKey.reset();
}

To make WatchEvent<Path> work, we create a small utility as below (this is the castEvent() used in the code above).

static <T> WatchEvent<T> castEvent(WatchEvent<?> event) {
    return (WatchEvent<T>)event;
}

Now compile the file and pass a directory as a runtime parameter. Once the program is running, create some directories/files or modify/rename files in the directory you passed; the program will trigger events and you should see the modifications in the console. A sample output from my machine is below.


The full source code of the application is given below. You can also download the compiled class and code.

Download Source & Class: JSR203_NIO2_WatchFolder.zip

import java.nio.file.*;
import static java.nio.file.StandardWatchEventKind.*;

public class JSR203_NIO2_WatchFolder {

    static <T> WatchEvent<T> castEvent(WatchEvent<?> event) {
        return (WatchEvent<T>)event;
    }

    public static void main (String args[]) throws Exception {
        Path _directotyToWatch = Paths.get(args[0]);
        WatchService watcherSvc = FileSystems.getDefault().newWatchService();
        WatchKey watchKey = _directotyToWatch.register(watcherSvc, 
		ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);

        while (true) {
            for (WatchEvent<?> event: watchKey.pollEvents()) {
                WatchEvent<Path> watchEvent = castEvent(event);
                System.out.println(event.kind().name().toString() + " " 
			+ _directotyToWatch.resolve(watchEvent.context()));
            }
            watchKey.reset();
        }
    }
}

Published by Venish Joe on Sunday, October 18, 2009