Banking Server Customization

Since April 2008

The ZKA standard governs the computerization of banking in Germany; a client communicates by sending a different type of file for each order. Responsible for customization of the banking server by providing various plugins, based on past expertise in German banking standards as well as deep knowledge of Java. The role also involved a three-month visit to the customer site in Munich, Germany to better understand the requirements.


Most people would rather curl up with a hardbound paper book than read an online version on a screen – even if the online version were free.

Yet there are so many forms of media online, like embedded short sound clips or video clips, that are an integral part of an online reading experience.


– an encyclopedia of birds, with embedded audio of bird sounds for each bird.
– a pregnancy care or baby care book, with video demos of how to handle each stage of development
– a yoga book with video demos
– manuals of any product with demo of each feature
– medical books with video demo of delicate instructions like surgery

So naturally there’s tremendous potential for a new product – a multimedia book that has links in it to access the embedded content.

It’s possible with current technology:

An add-on audio-visual device that holds the multimedia content and has a bar code scanner. The ‘link’ in the book could simply be a bar code image: during reading, one swipes the bar code with the device and it instantly plays the audio or video content.

There’s already a scanner available in audio-visual devices like mobile phones (the Nokia E71, for example). Here’s a demo – a pretty bad one (identifying the bar code is a lot easier and faster than shown in this video, probably because the guy was distracted by making the video 😉 ) but it will do for the time being…

On my E71 it works like a charm. All it does is show me the hyperlink the image is associated with, and I can either close the box or click on it to visit it. But a good enough start!

Some more examples…

Unlike what the probably outdated video says, it’s not some future technology.

Have a look at Nokia Mobilecodes, which talks about it in more detail and where you can even create your own bar code images.

So all we really need is a J2ME/mobile application that
– activates the bar code reader software
– reads the resultant hyperlink (actually bar codes can be custom-made, any url can be encoded into a custom unique bar code image)
– connects to the link and plays the media.

Of course, related content needs to be on the site.

With memory cards of 2 GB and more being so common, we can store the content on the card itself, and then simply map to it and play it offline without any need for internet connectivity.
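As a minimal sketch of this offline mapping idea – all codes and file paths below are hypothetical examples, assuming the scanner hands us the decoded value as a plain string:

```java
import java.util.HashMap;
import java.util.Map;

// Maps a decoded bar code value to a media file on the memory card.
public class MediaCatalog {

    private final Map<String, String> codeToFile = new HashMap<String, String>();

    public MediaCatalog() {
        // For a single book, short numeric codes are enough:
        codeToFile.put("001", "E:/birds/sparrow.wav");
        codeToFile.put("002", "E:/birds/cuckoo.wav");
    }

    // Returns the media file for a scanned code, or null if unknown.
    public String lookup(String scannedCode) {
        return codeToFile.get(scannedCode);
    }
}
```

The playing part would then just hand the looked-up path to the phone’s media player.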

Context-specific content simplifies things further – for example, when all content relates to a single book on birds. If we only need to handle, say, 100 bird sounds in an encyclopedia, even plain numbers would do rather than globally unique bar code images. This would make the image-reading logic (if bar code creation/reading software is to be implemented) a lot simpler.

The E71 costs approx 20K. There might be cheaper alternatives if the hardware is built from scratch?

What’s needed:
– miniature camera – even a low-res one that takes only b&w images would do
– a display screen & a speaker
– storage card & reader
– some kind of chip that runs the software that does the mapping and playing job

Hmm… there are cell phones costing less than 3K that have a lot of these features – and consider that we don’t even need any reception/transmission… and the possibility of mass production might reduce the cost still further…

Here is a stored procedure that replaces customer specific data with random strings. There is a function that generates strings with as many words as specified in the parameter. For example, name will have only two words, whereas address could have 4 words and so on.

This function uses a for loop to iterate through the records. A for loop is more expensive than using a cursor, but it is used here as this is only a one-time operation. customer_prj is the customer table.

Here’s the code!


After all these years of using so much free software and so many free tools – though I’ve contributed occasionally, on and off – this is finally a time when I’ve actually come up with a product completely from scratch.

Do check it out when you’re free, it might help you become more free! 😉 Feedback/criticism/feature requests welcome – preferably please use the issue tracker to log them. It’s just a simple comment form.

Kaala – your personal time keeper

If I wanted to compare only the date parts of two dates, ignoring the time parts, it should be simple, right?

If I have two date instances representing “23 Oct 2008 22:41” and “23 Oct 2008 14:22”, and I want to determine that they represent the same day, there should be some method in the JDK to help me – or so I thought. But it’s surprisingly complicated – the method is just not there!

The Java Date object is only a wrapper around a long that holds the number of milliseconds since the standard base time known as “the epoch”, namely January 1, 1970, 00:00:00 GMT. So it doesn’t really have any day part. The closest we can get is to call setHours, setMinutes and setSeconds with 0 for both dates in the comparison, but this is laborious, and those methods are deprecated anyway.

The Calendar class, which might appear to be better as it has replacements for the deprecated Date methods, still doesn’t offer any separation of date and time. Even here, one approach to a comparison would involve zeroing out the hour, minute and second.

Maybe one way to do this would be:

Calendar today = Calendar.getInstance();
today.set(today.get(Calendar.YEAR),
	  today.get(Calendar.MONTH),
	  today.get(Calendar.DAY_OF_MONTH),
	  0,   // Hour
	  0,   // Minute
	  0);  // Second

Calendar today2 = Calendar.getInstance();
today2.set(today2.get(Calendar.YEAR),
	  today2.get(Calendar.MONTH),
	  today2.get(Calendar.DAY_OF_MONTH),
	  0,   // Hour
	  0,   // Minute
	  0);  // Second

This would be refactored into some static utility method, of course. Anyway, I’d have to hope that today.equals(today2) will return true – only hope, because even before trying it out I’m paranoid about whether the milliseconds component still has some value that will result in inequality! Alternatively, the separate set methods of the Calendar class can be called to set all the irrelevant components to zero, and then the before or after methods can conveniently be used.
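As a sketch of what that utility method could look like (the class and method names here are my own), zeroing the millisecond field as well puts the paranoia to rest:

```java
import java.util.Calendar;
import java.util.Date;

public class DayCompare {

    // Returns a calendar for the given date with all time fields
    // zeroed, including the easily forgotten milliseconds.
    public static Calendar truncateToDay(Date date) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(date);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return cal;
    }

    // Two dates fall on the same day if their truncated calendars are equal.
    public static boolean sameDay(Date d1, Date d2) {
        return truncateToDay(d1).equals(truncateToDay(d2));
    }
}
```

The truncated calendars can also be compared with before or after for ordering.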

The Apache Commons project includes a utility class that offers a single static call, DateUtils.isSameDay, which takes two Date or Calendar objects and compares only the date parts!

I wondered how they did it and dug up the source* …

    public static boolean isSameDay(Date date1, Date date2) {
        if (date1 == null || date2 == null) {
            throw new IllegalArgumentException("The date must not be null");
        }
        Calendar cal1 = Calendar.getInstance();
        cal1.setTime(date1);
        Calendar cal2 = Calendar.getInstance();
        cal2.setTime(date2);
        return isSameDay(cal1, cal2);
    }

    public static boolean isSameDay(Calendar cal1, Calendar cal2) {
        if (cal1 == null || cal2 == null) {
            throw new IllegalArgumentException("The date must not be null");
        }
        return (cal1.get(Calendar.ERA) == cal2.get(Calendar.ERA) &&
                cal1.get(Calendar.YEAR) == cal2.get(Calendar.YEAR) &&
                cal1.get(Calendar.DAY_OF_YEAR) == cal2.get(Calendar.DAY_OF_YEAR));
    }

Some additional useful methods could be isEarlierDay and isLaterDay, where the == would be substituted with > or < comparisons.
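A sketch of those two (my own code, not from Commons) – note that a blind substitution of == with < won’t quite do, because the fields have to be compared most-significant first (31 Dec 2007 has a larger DAY_OF_YEAR than 1 Jan 2008):

```java
import java.util.Calendar;

public class DayOrderUtils {

    // True if cal1 falls on an earlier day than cal2:
    // compare era, then year, then day-of-year.
    public static boolean isEarlierDay(Calendar cal1, Calendar cal2) {
        if (cal1 == null || cal2 == null) {
            throw new IllegalArgumentException("The date must not be null");
        }
        if (cal1.get(Calendar.ERA) != cal2.get(Calendar.ERA))
            return cal1.get(Calendar.ERA) < cal2.get(Calendar.ERA);
        if (cal1.get(Calendar.YEAR) != cal2.get(Calendar.YEAR))
            return cal1.get(Calendar.YEAR) < cal2.get(Calendar.YEAR);
        return cal1.get(Calendar.DAY_OF_YEAR) < cal2.get(Calendar.DAY_OF_YEAR);
    }

    // The mirror image of the above.
    public static boolean isLaterDay(Calendar cal1, Calendar cal2) {
        return isEarlierDay(cal2, cal1);
    }
}
```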

* the source is also easily viewable using JadClipse in conjunction with JAD, but it wouldn’t show the static constant names like Calendar.ERA – it would just put in the numeric values.

When, after many years of development, I first came across the test-driven approach a couple of years ago, it didn’t make immediate sense. The main drawback seemed to be that there was just not enough time to write the tests. But over time, the key benefits I’ve seen are that it really documents the requirements at a coding level, and it also builds in looser coupling among modules. And of course automated regression testing is just fantastic – with a well-written test suite, one can be reasonably confident that changes in one module haven’t adversely affected dependent modules, right after a build (when the test suite is executed as a post-build step)!

On the other hand, I’ve found it pretty challenging as well in many situations – especially in UI development, where I usually gave up. Sometimes I felt that building up the mock environment to test one particular module just took too much effort to be worth it. It’s pretty tempting to bypass it and just get on with development! 😉 But once the benefits are seen, it just becomes a matter of discipline for any good developer to start absorbing and cultivating this methodology!

Here is some more info from the web…

Test-driven development (TDD) is a software development technique that uses short development iterations based on pre-written test cases that define desired improvements or new functions. Each iteration produces code necessary to pass that iteration’s tests. Finally, the programmer or team refactors the code to accommodate changes. A key TDD concept is that preparing tests before coding facilitates rapid feedback changes. Note that test-driven development is a software design method, not merely a method of testing.

~ Wiki on TDD

Test-driven development (TDD) is an advanced technique of using automated unit tests to drive the design of software and force decoupling of dependencies. The result of using this practice is a comprehensive suite of unit tests that can be run at any time to provide feedback that the software is still working. This technique is heavily emphasized by those using Agile development methodologies. In order to use this technique with Visual Studio Team System, you must understand some other topics:

~ Guidelines for Test-Driven Development – from Microsoft MSDN

And from the same link, here are some of the benefits…

  • The suite of unit tests provides constant feedback that each component is still working.
  • The unit tests act as documentation that cannot go out-of-date, unlike separate documentation, which can and frequently does.
  • When the test passes and the production code is refactored to remove duplication, it is clear that the code is finished, and the developer can move on to a new test.
  • Test-driven development forces critical analysis and design because the developer cannot create the production code without truly understanding what the desired result should be and how to test it.
  • The software tends to be better designed, that is, loosely coupled and easily maintainable, because the developer is free to make design decisions and refactor at any time with confidence that the software is still working. This confidence is gained by running the tests. The need for a design pattern may emerge, and the code can be changed at that time.
  • The test suite acts as a regression safety net on bugs: If a bug is found, the developer should create a test to reveal the bug and then modify the production code so that the bug goes away and all other tests still pass. On each successive test run, all previous bug fixes are verified.
  • Reduced debugging time!
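As a tiny illustration of the regression-safety-net point above – the discount example is made up, and plain Java assertions stand in for what would normally be a JUnit test:

```java
public class PriceCalculator {

    // Production code after the bug fix: a discount must
    // never push the price below zero.
    public static double applyDiscount(double price, double discount) {
        double result = price - discount;
        return result < 0 ? 0 : result;
    }

    // The test written first to reveal the bug, then kept
    // in the suite so the fix is re-verified on every run.
    public static void testDiscountNeverNegative() {
        assert applyDiscount(100.0, 150.0) == 0.0;
        assert applyDiscount(100.0, 30.0) == 70.0;
    }
}
```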

This gives a nice diagram to explain it better: Introduction to Test Driven Design (TDD)

Further links…

Java Testing

Windows/.NET Testing

My encounter with maven was all backwards – I started with maven2, was then compelled to learn the earlier version on the next project (where there was some reluctance to upgrade because of time pressure), and finally came to know more about Ant as well! 🙂 But let me put this post in the right order here. This is meant to be a crash course just to give the big picture to anyone not familiar with these concepts. The links provide a lot more information, and feel free to ask for anything further.

What is a build utility? I found this description in wiki…

the user specifies what will be used (typically source code files) and what the result should be (typically an application), but the utility itself decides what to do and the order in which to do it.

The simplest compilation in Java development starts from the command line – using the javac command to compile a single java source file, and then java to execute the resulting class file. Then there are IDEs that provide project structures and menus for this build process, but these are very IDE-centric, with not much scope for automation (we need to launch the IDE to build the project).

And then, with increased complexity, we need to add other dependent jars to the class path to start with. Going further, we’d need to zip the resulting class files into a jar file, plus various other possible steps like copying configuration files or adding manifest files. And as more projects tap into the advantages of test-driven development, one would like to automatically run regression tests after a build to ensure the integrity of the latest development changes.

True, such additional tasks can be managed with command line .bat (or .sh in Unix) shell scripts to a certain extent. But as one moves from project to project, the way source files are organised and the batch files are written keeps changing, and this is a big burden on the maintainability of the project.

So going beyond make-style tools, Apache’s Ant went one step further to improve the build process:

Instead of a model where it is extended with shell-based commands, Ant is extended using Java classes. Instead of writing shell commands, the configuration files are XML-based, calling out a target tree where various tasks get executed. Each task is run by an object that implements a particular Task interface.

An ant task is a single command with parameters – for example, for an ant task to copy a file, the parameters would be the source file, the destination file, and whether to overwrite.
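In a build.xml, that copy task would appear roughly like this (target and file names are placeholders of my own):

```xml
<target name="copy-config">
    <!-- copy a file; overwrite="true" replaces an existing destination -->
    <copy file="conf/app.properties"
          tofile="build/app.properties"
          overwrite="true"/>
</target>
```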

But another Apache project, Maven, was a major milestone in the build process, as it standardised this whole approach to build management. It introduced conventions on how a project and its various artifacts are to be structured.

We wanted a standard way to build the projects, a clear definition of what the project consisted of, an easy way to publish project information and a way to share JARs across several projects.

So a maven project typically has the following structure:

There’s a src\main folder that contains the java and resources sub folders.

¦   +---main
¦       +---java
¦       ¦     +---com
¦       ¦           +---sm
¦       ¦                 +---data
¦       ¦                 +---util
¦       +---resources
¦           +---conf

The output goes under the target\classes folder.

There are three files associated with a maven 1.x project.

– project.xml
– maven.xml

(Note that they are replaced with just one file, pom.xml, in maven 2.0)

The first one is the main project file; the other two are optional. The project.xml contains the details of the project, the dependencies of the project, and the source and resources directory locations.
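A minimal project.xml might look roughly like this – a from-memory sketch of the maven 1.x format, with placeholder ids and versions:

```xml
<project>
    <pomVersion>3</pomVersion>
    <id>my-app</id>
    <currentVersion>1.0</currentVersion>

    <dependencies>
        <!-- resolved from the repository, not from a local lib folder -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.15</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/java</sourceDirectory>
    </build>
</project>
```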

Before maven, dependent jar files were placed in a lib directory or tree structure (which might be maintained under source control). Maven instead introduces the concept of a repository where the libraries are stored. All references to dependencies in the project.xml file are resolved from this repository.

There are centrally maintained searchable repositories, or one may maintain a repository privately, say within an organisation, to host proprietary jar files; a repository server like Artifactory can be used for this. In either case, each development machine has a local repository, typically under the user profile folder – on my Windows machine it’s C:\Users\sanjay\.maven. This contains a cache (internal to maven) and a repository folder where all the jar files go, which looks like this…

    ¦   +---<group>
    ¦   ¦     +---jars
    ¦   +---<group>
    ¦   ¦     +---jars
    ¦   +---<group>
    ¦         +---jars
    ¦         +---java-sources
    ¦         +---javadocs

There is one such folder per dependency group. java-sources and javadoc jars may also be present, as shown in the last directory, but the main thing is the jars directory which contains the jar files.

This structure was for maven 1.x. Maven 2.x revamped a lot of things including the repository directory structure. However the main point is just to know that that’s where the jar files go.

Now maven builds the project such that the referenced projects are added to the class path.

There is no need to mess around with CLASSPATH environment variable any more.

A build process has a goal, a typical goal being the resulting binary, i.e. the JAR file after compilation. maven has several pre-defined goals, including compile, jar and clean.

To just create a jar file, project.xml alone is enough, but one can add more refined build or post-build steps by specifying them in the maven.xml file using Ant tasks. is a typical properties file (name-value pairs) that contains properties used during the build process.

maven needs to be installed; once it is installed and its bin directory is added to the system path, one can run the maven command at the project root to build the project.

maven2 is a more sophisticated upgrade. There’s just a single pom.xml file – one can upgrade an existing project.xml file to the new structure using the one:convert plugin goal. Since maven 1.x is no longer supported, that’s a pretty good reason to upgrade!

Some further reading…

Integration with eclipse – the “maven eclipse” command generates a neat eclipse project file, with all the maven dependencies added to the project class path.

However, with m2eclipse, a plugin for eclipse, this is simplified even further! But somehow I preferred the command line approach even in spite of the plugin.

Another thing is the set-up of a maven repository. After going through this comparison of different possible approaches, I was able to set up Artifactory quite easily.

It’s strongly recommended that any organisation set up a maven repository instead of maintaining jar files in some kind of directory structure!

Here are some of the benefits

  • Consistency in artifact naming
  • quick project setup, no complicated build.xml files, just a POM and go
  • all developers in a project use the same jar dependencies due to centralized POM.
  • Shared build meme. I know how to build any maven project
  • reduce the size of source distributions, because jars can be pulled from a central location
  • 99% of my needs are available out of the box.

These people have worked in this domain over the decades, distilled their experience and best practices, and come up with something that works really well for everybody. At the same time it’s not a rigid set of rules – there’s enough flexibility for customization, and that’s the real beauty of maven.

In the Microsoft world, we use msbuild and nant.

Usually we’re used to white background screens in most Windows applications. The post Join the Dark Side of Visual Studio gives a different perspective: a dark background with white text on it. The author compares a screen to a light bulb and even has photographs to illustrate the point.

Most people who see it for the first time are offended by it… but if you think about it, it really makes sense. It brings balance to the force.

The default scheme sports a bright white background color with dark text over it. But monitors these days are brighter than ever. You’re presumably a programmer, so you’ve no doubt had those late but productive coding nights, nights that are lit by only the glow of your monitor. The glow is bright enough to light up the room and cast shadows. Not unlike… a light bulb.

So there you are, staring straight into a strong light source, looking for the few pixels on it which are not illuminated. Can you read the wattage and manufacturer letters on the head of a light bulb while it’s turned on? Ahhh… but what if the bulb were black, and only the letters on it were illuminated?

This bulb has no markings, but you’d bet they’d show up nice and bright and easy to read in the right image. Another benefit someone pointed out to me once — if you’re on a laptop, it saves your battery life! Hooray for an extra 20 minutes of mobile coding!

It seems to me the only reason a black-on-white background is so standard is because the GUI was invented to be an analogy to pen and paper. Paper is white. Your screen doesn’t have to be. Don’t conform to the status quo! Plus, it just looks really cool… I think.

He also provides settings to enable a dark background in Visual Studio.NET, which I tried and quite liked, though with a few tweaks of my own. But I had to revert to the original as it didn’t go down very well with fellow programmers during paired programming sessions. Another developer, Gowri Kumar, felt that browsing and programming aren’t the same – a case in point being that google’s black version, blackle, isn’t an official page by google themselves. I find an occasional dark background easy on the eyes. Currently I’m working mostly in Eclipse, and though it’s not as simple as changing a theme, there happens to be a nice exported preference available for Eclipse as well, including a link to a very readable Monaco font!


One of the projects required a command line parameter for a date.

The date could either be a simple fixed date in dd.MM.yyyy format, or an expression containing a formula giving a difference in days from today, for example: $TODAY – 1 for yesterday, or $TODAY + 1 for tomorrow.

The evaluated date was to be rendered in a particular String format, for example yyyyMMdd.

So we decided to have an expression like the following…

$DATE(date, 'format')

Possible values for the command line parameter would then be:


Further examples (double quotes are needed if spaces are included):

from="$DATE($TODAY - 1, 'dd-M-yyyy')"
from="$DATE(12.2.2009, 'dd-M-yy')"
from="$DATE(2.3.2009, 'dd (M) yy')"
from="$DATE($Today + 3, 'yyyyMMdd')"


It seemed like something like this should already exist somewhere, but I could not find anything that came close on google. It can be done with a laborious string-parsing algorithm, but it’s better to have a more elegant, flexible solution than that. This is exactly the kind of case where a regular expression comes to the rescue! Googling turns up a lot of useful resources for learning about regular expressions, and this Eclipse plugin for Regular Expression testing was really useful for trying out what one learns. So after some trial and error, the following evolved…

// Actual expression is
// \$DATE\s*\(\s*((\$TODAY\s*(?:([-|\+])\s*(\d+))?)|((0?[1-9]|[12][0-9]|3[01]).(0?[1-9]|1[012]).(19|20\d\d)))\s*,\s*\'((.+))\'\s*\)
// The double \\ below is for the Java string escape character
private static final String DateExpr =
	"\\$DATE\\s*\\(\\s*((\\$TODAY\\s*(?:([-|\\+])\\s*(\\d+))?)|((0?[1-9]|[12][0-9]|3[01]).(0?[1-9]|1[012]).(19|20\\d\\d)))\\s*,\\s*\\'((.+))\\'\\s*\\)";

Parts of the expression explained:

  • \$ shows that $ is not to be treated as a special character, but as part of the string ($DATE)
  • \s is for whitespace
  • * is for zero or more occurrences, which allows for optional spaces in between.
  • () encloses a group. Groups can be ‘captured’. A captured group is an evaluated sub-part, which we will see shortly. A group that we’re not interested in capturing – a non-capturing group – is indicated with a ?: after the opening brace – (?: )
  • [0-9] is a digit within that range; in general [ ] encloses a character class.
  • ? indicates an optional element – zero or one occurrence
  • | is OR – so [-|\+] matches – or + (inside a character class the | is actually taken literally, so [-+] would have been enough; the \+ just makes it explicit that + is part of the string)

The Pattern class in the JDK API details all possibilities.

Now that we have the expression, we can have a pre-compiled version of it:

	private static final Pattern pattern
	     = Pattern.compile(DateExpr, Pattern.CASE_INSENSITIVE);

We can also store the number of expected groups, just for validation, though maybe this isn’t really necessary and can be skipped – it’s just me being paranoid here.

	private static final int DateExprGroupCount = 10;

Finally, here’s the function that takes the expression and returns the evaluated string. MyException below is just a custom exception that does the logging as well; it can be replaced with any application-defined exception.

It’s a lot simpler than it looks – there’s just some validation, and a lot of logging that can be removed once you know it’s working. Also, log.debug can be replaced with printlns if log4j isn’t being used.

public static String evalExpr(String expr) {

	String value = expr;

	// currently only the date expression is supported
	if (!expr.toUpperCase().startsWith("$DATE"))
		return expr;

	Matcher matcher = pattern.matcher(expr);

	if (matcher.find()) {
		log.debug("Could find a match for expression: " + matcher.group());
		log.debug("Group count: " + matcher.groupCount());

		if (matcher.groupCount() != DateExprGroupCount)
			throw new MyException(
			     "Date Parameter Parsing error - group count does not match expected "
			     + DateExprGroupCount, log);

		// Looping through just for debugging and logging purposes
		for (int i = 0; i <= matcher.groupCount(); i++)
			log.debug("RegEx group " + i + ": [" + + "]");
	} else {
		throw new MyException(
		"Date Parameter Parsing error - does not match expected pattern", log);
	}

	//	all is fine so far, extract the following values:
	//	Date Value, and Date Format

	Date day = null;

	// group 9 is the format string between the quotes
	DateFormat outFormat = new SimpleDateFormat(;

	// group 1 is the date part: either $TODAY +/- n, or a fixed date
	if ("$TODAY")) {

		Calendar date = Calendar.getInstance();

		date.set(Calendar.HOUR_OF_DAY, 0);
		date.set(Calendar.MINUTE, 0);
		date.set(Calendar.SECOND, 0);

		//	Operator is + or - (group 3), increment is group 4
		String opr =;

		if (opr != null) {
			String incStr =;

			int inc;
			try {
				inc = Integer.parseInt(incStr);
			} catch (Exception e) {
				throw new MyException("Reading Filter Parameter", e, log);
			}

			if (opr.equals("-"))
				inc = -inc;

			date.add(Calendar.DAY_OF_MONTH, inc);
		}

		day = date.getTime();
	} else {  //	expected numerical date, captured in group 5

		DateFormat inFormat = new SimpleDateFormat("dd.MM.yyyy");

		try {
			day = inFormat.parse(;
		} catch (ParseException e) {
			throw new MyException("Reading Filter Parameter", e, log);
		}
	}

	value = outFormat.format(day);

	log.debug("Resultant date string: " + value);

	return value;
}

To take it out for a spin, try out the following…

public static void main(String[] args) {
	System.out.println(evalExpr("$DATE($TODAY - 394, 'yyyyMMdd-HHmmss')"));
	System.out.println(evalExpr("$DATE(1.2.2009, 'yyyyMMdd')"));
}

If we don’t need the date formatting but just something like $TODAY + 1, it gets much simpler:

// Groups: 1 = $TODAY, 2 = operator, 3 = number of days
private final String TodayExpr =
	"(\\$TODAY)\\s*(?:([-|\\+])\\s*(\\d+))?";
private final Pattern pattern
	= Pattern.compile(TodayExpr, Pattern.CASE_INSENSITIVE);

And the evaluation function can return a Date object:

public Date evalExpr(String expr) {

	Date dateVal = null;

	// currently only the date expression is supported
	if (!expr.toUpperCase().startsWith("$TODAY")) {
		try {
			dateVal = new SimpleDateFormat("dd.MM.yyyy").parse(expr);
		} catch (ParseException e) {
			throw new RuntimeException("Reading Parameter" + e);
		}
		return dateVal;
	}

	Matcher matcher = pattern.matcher(expr);

	if (matcher.find()) {
		log.debug("Could find a match for date expression: " + matcher.group());
		log.debug("Group count: " + matcher.groupCount());

		if (matcher.groupCount() != 3)
			throw new RuntimeException(
			"Date Parameter Parsing error - group count does not match.");

		// Looping through just for debugging and logging purposes
		for (int i = 0; i <= matcher.groupCount(); i++)
			log.debug("RegEx group " + i + ": [" + + "]");
	} else {
		throw new RuntimeException(
		"Date Parameter Parsing error - does not match expected pattern");
	}

	//	all is fine so far, evaluate the date

	Calendar date = Calendar.getInstance();

	date.set(Calendar.HOUR_OF_DAY, 0);
	date.set(Calendar.MINUTE, 0);
	date.set(Calendar.SECOND, 0);

	//	Operator is + or - (group 2), increment is group 3
	String opr =;

	if (opr != null) {
		String incStr =;

		int inc = 0;
		try {
			inc = Integer.parseInt(incStr);
		} catch (Exception e) {
			throw new RuntimeException("Reading Increment Parameter" + e);
		}

		if (opr.equals("-"))
			inc = -inc;

		date.add(Calendar.DAY_OF_MONTH, inc);
	}

	dateVal = date.getTime();

	log.debug("Resultant date: " + dateVal);

	return dateVal;
}

These were written separately, though it’s quite possible that the two could be refactored to work together if required.

There seems to be nothing ready-made equivalent to Microsoft’s SQL Profiler that allows us to monitor SQL statements executing at the database. A lot of sites talk about using the v$session view with a complex SQL query, for example this one. All I wanted to do was look at the exact query sent from my application. v$sql holds a huge amount of SQL of all kinds, including statements internal to the Oracle system, hence some filtering is needed. I found that the following simple query worked well for me:

select sql_text from v$sql
where UPPER(sql_text) like '%CSV_%'
order by last_load_time desc

Replace the like condition with the name of a table that you know. If all your tables have a particular prefix, you could use that prefix and get every query related to your application. More complex tracing can be done using SQL_TRACE and by enabling profiling.