Eclipse – Data Assist in tooltip

While debugging in Eclipse, hovering over a variable does not show its actual value; instead it shows the method/type information, which is annoying. The developer has to go to the debug view (or right-click -> Display) to see a variable's value while in debug mode.

I want to see the value by hovering the mouse over a variable, rather than right-clicking or switching the Eclipse perspective.

To do that, go to Preferences and search for "Hover" (Java > Editor > Hovers). You will see the following default screen:

[Screenshot: default Hovers preference page]

Now un-select the "Combined Hover" option, but keep the "Source" option selected. Apply and save.

[Screenshot: Hovers preference page with "Combined Hover" unchecked]

Great, now you can see the value of a variable in the tooltip. Also, if you hover over a method, you can see the method's implementation.

Setting Java Cryptography Extension (JCE) Unlimited Strength

A default Java installation does not come with the unlimited-strength encryption policy files, due to US export regulations. They need to be downloaded and installed separately:

1. Download the JCE Policy jar files from the below location:

2. The zip file contains two jar files (local_policy.jar and US_export_policy.jar).

3. These jar files need to be placed under the ‘jre/lib/security’ directory of your JDK/JRE. For my Mac, this location is as follows:


Enjoy strong encryption algorithm now :)
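To verify that the policy files took effect, a quick standard-library check can help (a sketch; the class name is mine):

```java
import javax.crypto.Cipher;

public class JceCheck {
    public static void main(String[] args) throws Exception {
        // 128 with the default (limited) policy files,
        // Integer.MAX_VALUE once the unlimited-strength jars are in place
        int maxAes = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max AES key length: " + maxAes);
    }
}
```

If this prints 2147483647 (Integer.MAX_VALUE), the unlimited-strength policy is active.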

Installing Ubuntu using VirtualBox

I need separate VMs to test my active-active use cases, which require multiple VMs to simulate a multi-data-center setup.

Installing Oracle VirtualBox and Ubuntu is quite simple, though there are minor issues in configuring the network and the screen resolution.

Just note down the steps:

    • Download Ubuntu (64-bit) from “” and save the .iso image file to your disk.
    • Download Oracle VirtualBox from
    • Most of the configuration is intuitive and can be re-configured later. Select the default settings.
    • On start of the VM, it will ask for an operating system; browse to the downloaded image (.iso) file.
    • Start the VM and install Ubuntu from the startup menu.
    • Select “Something else” during the installation process.
    • During installation, create an ext4 partition. Allocate approximately 7 GB (less than the virtual disk space available) and mount it on the root (“/”) directory.
    • Press Ctrl+Alt+T to get a command prompt.
    • sudo apt-get install default-jdk (this installs OpenJDK, version 7u71 at the time of writing). You can install other versions as per your project's requirements.
    • To configure networking,
      • select ‘Network’ on the VirtualBox panel > Adapter 1 > Bridged Adapter > xxxx Gigabit Network Connection card (for Ethernet)
      • or Adapter 1 > Bridged Adapter > xxx Centrino Advanced (for wireless)
    • Change the user to ‘root’ with ‘sudo -i’
    • gedit /etc/network/interfaces
    • Add the following for DHCP:

auto eth0
iface eth0 inet dhcp

    • Add the following for a static address (a static configuration also needs an address, netmask, and gateway; the values below are examples, so adjust them for your network):

auto eth0
iface eth0 inet static
address 192.168.1.50
netmask 255.255.255.0
gateway 192.168.1.1

    • Release and bring the eth0 connection back up:

      sudo ifdown eth0

      sudo ifup eth0

    • sudo service resolvconf restart
    • To fix the screen resolution, install the VirtualBox guest additions:

sudo apt-get install virtualbox-guest-dkms

    • Restart your VM.
    • Install Tomcat, if needed.


    • Configure .bashrc with the Java and Tomcat paths:

sudo nano ~/.bashrc
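The entries might look like the following (the paths here are my assumptions; adjust them to your actual install locations):

```shell
# Hypothetical install locations; adjust as needed
export JAVA_HOME=/usr/lib/jvm/default-java
export CATALINA_HOME=/opt/tomcat
export PATH="$PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin"
```

Run `source ~/.bashrc` (or open a new terminal) for the changes to take effect.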


SSL-enabled TCP Trace

I was facing an issue while taking a TCP trace on the client end for an SSL-enabled server: the trace was encrypted :)

I like using Grinder for TCP dumps, by running its TCPProxy on a different port.

The following are ways you can generate a readable (decrypted) trace:

1. Using Grinder:

java -cp grinder/grinder.jar -Xms16m -Xmx32m net.grinder.TCPProxy -localhost localhost -localport 9090 -remotehost  <Server Address> -remoteport 443 -keystore myserverJKS.jks -keyStorePassword abcd1234 -ssl

Run the above command from outside your Grinder package (or change the jar file path accordingly).

Note the “-ssl” flag in the command.

2. Using Wireshark

Wireshark is great and can easily be configured for SSL based TCP traces:

– If you do not have a .key file, extract it from your JKS. Note that keytool cannot export a private key directly; convert the keystore to PKCS12 format first, then extract the key with openssl:

keytool -importkeystore -srckeystore <jks file name> -destkeystore myKeystore.p12 -deststoretype PKCS12

openssl pkcs12 -in myKeystore.p12 -nocerts -nodes -out myKey.key

Both commands will ask for the keystore password; it is the same one used to generate the JKS.

– Open Wireshark, Edit > Preferences > Protocols > SSL > RSA keys list, and add an entry with the server IP, port (443), protocol (http), and the .key file. (The “(Pre)-Master-Secret log filename” field is for an NSS-style key log file, not for a private key.)

– Start TCP trace

Use efficient streaming to upload your files to a server

I was trying to figure out whether I can upload files using a streaming process.

What is a streaming process? Byte-by-byte upload.

Advantage: if the upload is interrupted by a bad/slow network connection, you do not need to re-send the bytes that have already reached the server/storage disk.

Disadvantage: you need to send each byte individually.

Now let's choose the middle way, using buffers. We load a buffer with bytes and send it to the server/storage when the buffer is full.

Let’s see the attached code:

final byte[] bytesRead = new byte[bufferSize];
int noOfBytesRead;
long beginOffset = startingOffset; // if we have any starting offset, else start with 0
long endOffset;
final ByteArrayOutputStream baos = new ByteArrayOutputStream(bufferSize);

// inputStream, bufferSize, startingOffset and uploadBytes(...) are assumed
// to be defined by the surrounding code.
while ((noOfBytesRead = inputStream.read(bytesRead)) != -1) {

    // Accumulate the bytes just read into the buffer
    baos.write(bytesRead, 0, noOfBytesRead);

    // When the buffer exceeds the configured size, flush it to storage
    if (baos.size() > bufferSize) {
        endOffset = (beginOffset + baos.size()) - 1;
        uploadBytes(baos.toByteArray(), beginOffset, endOffset);
        beginOffset = endOffset + 1;
        baos.reset(); // clean the buffer for the next chunk
    }
}

// After the loop, send whatever is left in the buffer
if (baos.size() != 0) {
    endOffset = (beginOffset + baos.size()) - 1;
    uploadBytes(baos.toByteArray(), beginOffset, endOffset);
}

Simple use of a buffer. A few things to remember:

  1. Keep reading bytes until you reach the end of the file.
  2. Load the data into a buffer; for that, define a ByteArrayOutputStream to copy data from the input stream.
  3. Calculate the begin and end offset positions as the bytes move along.
  4. Keep checking the buffer size; if it exceeds the defined limit, send all buffered bytes to the server (storage) with the start and end offsets, and clean the buffer.
  5. In the last section (after the while loop), once the end of the file has been reached (read returns -1), send the remaining bytes to the server/storage with the correct start and end offsets.

The above method can be optimized a bit, but it works well for us, so I'm sticking with it.
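The steps above can be exercised end-to-end with an in-memory sketch (the class name, the stand-in uploadBytes, and the tiny 4-byte buffer are mine, chosen so the flush path actually triggers):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedUploadDemo {
    static final ByteArrayOutputStream STORAGE = new ByteArrayOutputStream();

    // Stand-in for the real storage call: records the byte range it receives
    static void uploadBytes(byte[] chunk, long begin, long end) {
        STORAGE.write(chunk, 0, chunk.length);
        System.out.println("uploaded [" + begin + ", " + end + "]");
    }

    public static void main(String[] args) throws IOException {
        final int bufferSize = 4; // tiny, to force several flushes
        byte[] source = "hello streaming world".getBytes();
        InputStream in = new ByteArrayInputStream(source);

        byte[] readBuf = new byte[bufferSize];
        ByteArrayOutputStream baos = new ByteArrayOutputStream(bufferSize);
        long beginOffset = 0;
        int n;
        while ((n = in.read(readBuf)) != -1) {
            baos.write(readBuf, 0, n);
            if (baos.size() > bufferSize) {
                long end = beginOffset + baos.size() - 1;
                uploadBytes(baos.toByteArray(), beginOffset, end);
                beginOffset = end + 1;
                baos.reset();
            }
        }
        if (baos.size() != 0) {
            long end = beginOffset + baos.size() - 1;
            uploadBytes(baos.toByteArray(), beginOffset, end);
        }

        // STORAGE now holds the full payload, byte for byte
        System.out.println(STORAGE.size() == source.length);
    }
}
```

Each uploaded chunk's [begin, end] range is contiguous with the previous one, which is exactly what a resumable upload API needs.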

Rotating Tomcat's catalina.out file

In some cases Tomcat prints huge, and sometimes quite redundant, logs to the catalina.out file. The problem comes when the file size is constantly increasing and triggers panic alarms for disk space (if logs are not configured on a separate partition/mounted disk).

You need to know about three things to implement and configure logrotate:

1. /etc/cron.daily/ folder – contains the logrotate script, executed on a daily basis, which loads the configuration file ‘/etc/logrotate.conf’.

2. /etc/logrotate.conf file – the configuration file for the cron job. It has entries for all the logs to be rotated on a daily basis.

3. /etc/logrotate.d folder – logrotate.conf loads all configuration files from this folder.

Hence, to configure a log rotation, you can either copy a configuration file into the ‘logrotate.d’ folder OR directly add the entries to the ‘logrotate.conf’ file.

What does the configuration look like?

Create a new file (any name) in the ‘logrotate.d’ folder:

<path-to-log-folder>/catalina.out {
  copytruncate
  rotate 7
  size 10M
}

Note ‘copytruncate’: Tomcat keeps catalina.out open, so the file has to be copied and truncated in place rather than moved out from under the running process.

‘logrotate’ has many useful features, which can be checked with ‘man logrotate’ on a Unix machine. A few are:

  • copy – Make a copy of the log file, but don’t change the original at all.
  • mail <email@address> – When a log is rotated out-of-existence, it is mailed to address.
  • olddir <directory> – Logs are moved into <directory> for rotation.
  • postrotate/endscript – The lines between postrotate and endscript are executed after the log file is rotated.

A quick note: I ran into this issue while doing a load test, where a few external jars print console output (which I need); my catalina log kept growing and made the environment unresponsive because of disk space. I tried to clean up the logs every hour, but the default ‘logrotate’ setup does not support an ‘hourly’ configuration.

I did the following:

  • Created a separate configuration file of my own (like logrotate_tomcat.conf) without impacting the other cron configurations.
  • Copied the ‘logrotate’ script to the ‘cron.hourly’ folder.
  • Updated that logrotate script to load my new configuration.
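The hourly hand-off above can be sketched as a copy of the stock cron script pointing at the dedicated configuration (the file names and paths here are my assumptions):

```
#!/bin/sh
# /etc/cron.hourly/logrotate (sketch)
# Runs logrotate every hour against the Tomcat-specific configuration,
# leaving the stock daily job in /etc/cron.daily untouched.
/usr/sbin/logrotate /etc/logrotate_tomcat.conf
```

You can dry-run any configuration first with ‘logrotate -d <config-file>’ to verify it without actually rotating anything.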




A simple Mongo based Web application

I was watching the MongoDB M101J tutorial, where they create a simple web application using the following technical components:

1. Spark Java embedded platform (for the web server) –

2. FreeMarker (a front-end page templating tool)

3. MongoDB – the backend database

Let's start with a simple web server exposing a plain String:

1. Add repository in pom.xml file:

<repository>
  <id>Spark repository</id>
  <url><!-- repository URL --></url>
</repository>

2. Create a simple Java file:

import static spark.Spark.*;
import spark.*;

public class HelloWorld {

    public static void main(String[] args) {
        get(new Route("/hello") {
            @Override
            public Object handle(Request request, Response response) {
                return "Hello World!";
            }
        });
    }
}

We can access this at the URL http://localhost:4567/hello.

There is no XML or properties configuration needed to start a web application.

3. Integrate it with FreeMarker

Add Repository:


Update java code:

final Configuration configuration = new Configuration();

configuration.setClassForTemplateLoading(Week1Homework4.class, "/"); // "/" is a path relative to the class given in the first parameter

Create template:

<title>The Answer</title>
<h1>The answer is: ${answer}</h1>

Fill the template with a Java Map whose keys match the placeholders already defined on the HTML page:

Template helloTemplate = configuration.getTemplate("answer.ftl");

Map<String, String> answerMap = new HashMap<String, String>();

answerMap.put("answer", Integer.toString(answer));

helloTemplate.process(answerMap, writer); // writer can be any Writer, e.g. a StringWriter

return writer;

4. Combining MongoDB with FreeMarker

The great advantage of FreeMarker is that it uses a Map of name/value pairs for data presentation, and MongoDB's BasicDBObject is also an implementation of Map, representing JSON-format data.

Hence, passing a BasicDBObject directly will replace the placeholder tags in the FreeMarker template with the respective values from MongoDB.

Video is available:

