Nov 17, 2010

Öredev 2010 - Some Thoughts

Öredev 2010 is over and it is time for me to write some words about the conference. What I like about the conference is that it has an interesting mix of tracks: Java, .NET, Agile, Web Development etc. It is always nice to broaden your views. New this year was the Xtra(ck). The Xtra(ck) contained sessions that had nothing to do with software development, or at least very far from it. For example there was a session called "Understand Hypnosis". These sessions were definitely a way to broaden your mind. I liked the "Photographic Composition and Creativity" session and the photo walk, both with Amy Archer. Thanks for the tips during the photo walk!

Also new for this year was the Öredev puzzle application. The application was designed around the concept of being social. Each participant got a puzzle code within the application. By sharing your code you got a piece of the puzzle. As a bonus you got the contact information for the person whose code you entered. Each person who finished the puzzle got an Öredev t-shirt and a chance to win some great prizes. The application was available for Blackberry, Android, Windows Phone 7 and iPhone. The application also contained the schedule and speaker bios. A really nice application. Next year I hope to see an improved version. For example, being able to put together your own schedule in the app would be great.

And here are some photos from the conference. Enjoy!

Öredev 2010 - Get Real

Speaker Dinner at Malmö City Hall

Developers Competing

Hard to Choose the Right Track?

Evan Doll Showing off Some Nice Xcode Tricks

Coffee Lounge

Jack Nutting about Making Money in the App Store

John Seddon

Marcus Zarra on Core Graphics

The Audience

Library by Night

And that is a wrap. See you next year!

Oct 25, 2010

Network Programming Tips for Mobile Developers (iPhone, Android, Java ME etc)

Among the most important things for a mobile application developer to master is network programming. Every mobile application I have developed has involved some kind of network communication. Over the years I have learned a thing or two. Read on to get some tips & tricks from me.

Quite often the server is developed alongside the client. This means that you as a client developer have to wait for those awfully slow server programmers. Not every server programmer is slow, but most likely you have to wait for some server functionality. In situations like these, it is handy to roll your own server. But as a client developer, you do not want to spend a massive amount of time setting up a server. In fact, if it takes longer than a couple of minutes most client developers give up. What you need is a server that is very quick and easy to set up. It also needs to be simple, yet powerful. Many server programmers would recommend setting up a Tomcat server. The advantage of a Tomcat server is that it is very versatile. But I do not really like Tomcat. It is too advanced for me. Another solution is to use a Jetty server. This is simpler to set up than Tomcat, yet rather powerful. It can be executed from Maven, which makes it convenient for automated tests. Maven takes care of most of the work, including starting and stopping your Jetty server. But there is a new and rising star, the Sinatra server. The Sinatra server is actually a Ruby library. You use Ruby to program the behavior of your server.

A simple “Hello World” implementation for Sinatra looks like the one below (from the Sinatra Book).

require 'rubygems'
require 'sinatra'

get '/' do
  "Hello world, it's #{Time.now} at the server!"
end
The code is saved in a file with the .rb extension; in our example we save it as “hello.rb”. Then starting your server is as simple as this:

ruby hello.rb

You get a nice line saying that Sinatra has taken the stage and that your server uses port 4567. This can of course be changed, if you want to mimic another server without changing your code. It is very easy to extend your server functionality. Ruby is easy to learn and powerful enough for making your own server without any big hassle. Take a look at the “Sinatra Book” if you want to master Sinatra.

Another common scenario is that you need to figure out what happens when you make a certain URL request, for example a REST request. Before even writing a single line of code, you could use the cURL command line tool. It is available on most platforms, like Unix, Linux, Mac OS, Windows etc. For the sake of discussion, let us assume that you want to check that you programmed your Sinatra web server correctly. Then you use the following command:

curl http://localhost:4567/
The response should look like this:

Hello world, it's Mon Oct 25 20:44:19 +0200 2010 at the server!

So now you know how to implement your own simple server, as well as how to debug your server requests using curl. But wait, there are even more tricks I want to share with you. I hope that you feel like reading a little bit more.

I think that XML is one of the more misused technologies around. It is used for many things, ranging from describing your builds (Ant, Maven, etc) to describing serialized objects traveling through cyberspace (SOAP, REST etc). When SOAP was introduced, one main argument for XML was that it is human readable. What? Have you ever seen a SOAP request that is human readable? If you are about to send and/or receive objects there are much more suitable technologies than XML. Especially when making a mobile client, where XML parsing can take too much time and memory, it is important to understand that there are good alternatives. One good old technology is ASN.1, which is hugely underestimated. It was designed for communicating data between different architectures and CPUs. It is fast even on an 8-bit CPU. The biggest drawback is that it is not widely supported and it requires an ASN.1 compiler. However you could implement your own ASN.1 encoder/decoder quite easily.

Another solution that is easier to use, but built on the same principles as ASN.1, is the Hessian protocol. It is a binary web service protocol. The specification was originally designed by Caucho, who did the Resin web server. The specification is open and implemented in many languages, including .NET, Flash, Ruby, Python, Objective-C etc. I have primarily used it for Java ME, where only a subset is implemented. If you use it in Java ME, I would recommend considering using it to store object data in the record store as well. But now it is more relevant for me to use it on Android or iPhone. The Objective-C variant for iPhone is called HessianKit. It is open source and released under the Apache 2.0 license. Thus it is not a viral open source license, which I think is great. I will not describe how to use it, since there already is a good article on the subject, “HessianKit Released”. I hope that you will consider using Hessian if you are in a position to decide what web service protocol to use. If you feel the urge to use XML for your web services, you could use Burlap, which is the XML version of Hessian. The communication is as simple as it could get using XML.
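
To get a feeling for why a binary encoding is so much more compact than XML, here is a minimal tag-length-value sketch in Java. It borrows the basic principle behind ASN.1 and Hessian, but it is an illustration only, not their actual wire formats; the class and layout are my own.

```java
import java.nio.charset.StandardCharsets;

public class Tlv {
    // Encode a value as tag (1 byte) + length (2 bytes, big-endian) + UTF-8 payload.
    public static byte[] encode(int tag, String value) {
        byte[] payload = value.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[3 + payload.length];
        out[0] = (byte) tag;
        out[1] = (byte) (payload.length >> 8); // length, high byte
        out[2] = (byte) payload.length;        // length, low byte
        System.arraycopy(payload, 0, out, 3, payload.length);
        return out;
    }

    // Decode by reading the two length bytes and slicing out the payload.
    public static String decode(byte[] data) {
        int length = ((data[1] & 0xff) << 8) | (data[2] & 0xff);
        return new String(data, 3, length, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] binary = encode(1, "Hello world");
        String xml = "<greeting>Hello world</greeting>";
        System.out.println(binary.length + " bytes vs " + xml.length() + " bytes");
        // prints "14 bytes vs 32 bytes"
        System.out.println(decode(binary));
    }
}
```

Three bytes of overhead instead of an opening and closing tag per field, and no text parsing on the receiving side. That is the whole idea, and it is why these encodings work well even on small devices.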

Another useful tool is a network analyzer. This is good for finding out what happens between the client and the server, for example if you want to take a look at the headers, which many times are auto generated. I have used Wireshark with great success. I would not say it is easy to use, but when you need it, it is priceless.

These are the tools that I think I use the most for network programming. What are your best tools when doing network programming?

Oct 8, 2010

Synchronizing iPhone 4 with Google without iTunes

Today I received a brand new iPhone 4. Life was joyful. However it gave me some frustrating moments before I could use it for real. Here is the true story of what happened when I got my iPhone 4.

I had prepared the migration by backing up my contacts from my HTC Hero. The HTC had no way to export contacts to a SIM card, only to an SD card. Strangely enough it was possible to import contacts from a SIM card. Anyway I did not consider this to be a big problem, since exporting to the SD card should work fine. The contacts were saved as a VCF file. This was easily imported by double-clicking on the file. When this was done, all the Google contacts were on my MacBook. After I started and initialized the iPhone, it was connected via a USB cable to the MacBook. Consequently iTunes was started with a wizard. One of the steps was the sync setup. By default all contacts were to be synced. Perfect! This was exactly what I wanted. A couple of more steps and I pressed the “Sync” button.

After a short while something really odd happened. A warning message said that synchronization of contacts was not possible, since the phone was not connected. What! The phone was connected. Not good! I discovered a button labeled “Restore”. The help text explained that this should be used when in trouble. I felt like being in trouble and hit the restore button. The process was really fast, but left enough time to get some tea. Then the same procedure once more. Still the same problem. Next try: search the Internet. I found out that I was not alone with this problem. I tried out all of the different solutions explained in the forum. I do not know how many times I cleared the sync settings, restarted the phone, re-booted the computer and tried all other kinds of tricks. The Apple support pages did not contain any suggestions other than those found in the forum. Now I was really frustrated. It should not be that complicated to get your contacts into your new phone. Time to take a break. Get your brain to think in a different way. Reset the brain. Get some food.

Several hours later it was time to try a new approach. I googled “Sync iPhone Google” and found an article called “Google Sync: Set Up Your Apple Device for Google Sync”. It seemed to be exactly what I was looking for. The article explained how to add an account on the iPhone to synchronize mail, contacts and calendar. In fact it was a Microsoft Exchange account. It took me less than 5 minutes to have mail, contacts and the calendar in sync.

The final solution was much simpler than copying contacts via the computer. The original intent was to only transfer the contacts and then set up a mail account on the phone. Now I get an all-in-one solution, with mail, contacts and calendar synchronization. Now my life is joyful again.

Oct 4, 2010

The Öredev Developer Conference 2010

Each year since its inception, I have attended the Öredev developer conference. What makes it special is that it is a combined conference, covering not only Java. This year it is especially important for me, since I can learn about both Android and iPhone development.

I will attend this year. Will you?

Do not forget to register at the Öredev site.

Sep 26, 2010

Microlog4Android V1.0 Released

Finally! After many months of struggling with Microlog4Android, the first official release is here. The core is the original Microlog code, but it has been re-written to take full advantage of Java SE features. For example, no Vectors are used. This means that the logging is faster than it was in Java ME. One important addition is the support for SLF4J. This should make it easy to migrate if necessary. This might also be an addition to future versions of Microlog. That way it should be easy to share code between Java ME and Android. The most important appenders are there as well.

Please download it from here. As always, any feedback and comments are welcome.

Sep 24, 2010

My World is Changing; Android & iPhone Development

I have been working with Android development for quite a while now. So I decided to do something quite different; iPhone development.

It is like groping in the dark. After many years with Java and garbage collection, it feels a little awkward to manage your memory by yourself. The first encounter with garbage collection in Java was really awesome. After a couple of years as a Java developer, you realize that the garbage collector is not the answer to all of your memory problems. But still, you are not forced to think about memory management on a daily basis. All of a sudden you need to think about memory management on a much more regular basis.

Eclipse has been the main tool in my Java toolbox for many years now. One might argue that IntelliJ or NetBeans is a better tool, but I have used Eclipse. Switching from Eclipse to Xcode is not easy. I miss the fabulous re-factoring support in Eclipse. There are many other small issues, but I am slowly and constantly learning new keyboard shortcuts in Xcode. I guess I will become a better Xcoder in a while. The Interface Builder in Xcode is an invaluable tool. It is very easy to get nice looking iPhone UIs. It is very nice not having to worry about getting your XML files right. Making a good looking UI on Android can be frustrating and cumbersome. Of course you can get nice UIs on Android, but I find it easier to create one on iPhone.

Last but not least, the markets are a little bit different. The Android Market is open; the App Store is a little bit more closed. There is no quality control when submitting to the Android Market. As a consequence there are a lot of really bad Android applications. It is hard to find what you are searching for on the Android Market. The search seems to be case sensitive. Not good. Usually I install applications that are recommended somewhere. In most cases, there is a QR code available that I scan with the Android barcode reader. A really nifty application that makes installation on Android simple. The applications in the App Store are controlled by Apple and it seems that they are of better quality than Android applications. It must be pointed out that there are many high quality Android applications, but they are harder to find. As a developer in Sweden it is not possible to get paid for your Android applications, but you can get paid for your iPhone apps. When will this problem be solved?

Another thing that seems to be missing in the iPhone world is open source projects. The Java and Android worlds are full of open source. However there seems to be good hope for the iPhone world. I found this list of open source applications for iPhone.

Of course there are many more differences. These are the most apparent differences from my perspective. At the end of the day I am a mobile software developer. Switching to iPhone gives me a new perspective of my world. I think that is a good thing.

May 18, 2010

Microlog, Microlog4Android, SLF4J and Other Stuff

Right now a lot of interesting stuff is happening with both Microlog and Microlog4Android. I have been working with Microlog, while Jarle has been working on Microlog4Android (M4A). Let me start with Microlog.

I have started the work on Microlog V3.0. To start with I have been able to make the structure a little bit simpler. For example I have removed some abstract classes, like the AbstractFileAppender. This was created for one purpose only: to be able to re-use parts of the file logging functionality between Microlog and M4A. A nice side effect is that we save some memory, both in terms of the memory footprint and run-time memory. Some bugs have been fixed and there are some minor improvements.

Even though Microlog is my long time sweetheart, the most exciting things are happening within the M4A project. When moving the code from Microlog to M4A, we skipped a lot of code. It was the core and only a few appenders that were moved. Working with the Android platform is quite different and the needs are a little bit different. For example, there is not really a need to log to the console. Instead we have replaced the ConsoleAppender with the LogCatAppender. This works like the built-in log classes. The reason for moving Microlog to the Android platform was to give Android developers more options than the built-in logging facilities. It is possible to use the logcat tool to log to a file, but in that case you do not get the normal logcat logging in Eclipse. I have never heard of any way of logging to a file on the SD card or on the device using the Android logging classes. With M4A it is possible to log to a file on the SD card. Just use the modified FileAppender to log to a file.

Another unique feature in Microlog is the ability to log to a remote server. So far we have the HTTPAppender, which logs to an HTTP server. This is definitely something that is not possible with the standard Android logging facility. The HTTP logging is ideal for field tests or similar, where the developers need to collect logs in a central place. No need to transfer log files manually. Note that during development it might be more natural for individual developers to log to a file.
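
The receiving side of this kind of HTTP logging can be surprisingly small. Here is a minimal sketch using the JDK's built-in HTTP server, where every POSTed body is stored as one log entry. Note that the /log path and the plain-text body are my own assumptions for illustration, not the actual HTTPAppender protocol; check the Microlog documentation for the real format.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LogCollector {
    public static final List<String> received =
            Collections.synchronizedList(new ArrayList<String>());

    // Start a server on an ephemeral port that stores every POSTed body.
    public static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/log", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            received.add(new String(body, StandardCharsets.UTF_8));
            exchange.sendResponseHeaders(200, -1); // empty response body
            exchange.close();
        });
        server.start();
        return server;
    }

    // Stand-in for the appender on the phone: POST one log line.
    public static void post(int port, String line) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/log").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(line.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // forces the request to be sent
        conn.disconnect();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        post(server.getAddress().getPort(), "DEBUG: field test entry");
        server.stop(0);
        System.out.println(received.get(0));
    }
}
```

In a real field test you would of course persist the entries instead of keeping them in a list, and point the appender on the phone at the machine running the collector.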

But that is not all! The most prominent addition is the SLF4J implementation. This gives you as a developer the freedom to change the logging implementation whenever you feel like it. For example, you may want to use M4A for file logging during development, but remove the logging at deployment time. Just replace the M4A jar with the NOP implementation. If you want to revert back to good old Android LogCat logging (only), use the SLF4J implementation.

Last but not least, we have changed the PropertyConfigurator to read configuration files from an Android resource directory. The file could either be stored in the /res/raw or the /assets directory.
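
For reference, a minimal configuration file along these lines could then be placed in /assets/microlog.properties (or under /res/raw). The exact property keys below are written from memory and are only meant as an illustration; double-check them against the M4A documentation before relying on them:

```properties
microlog.level=DEBUG
microlog.appender=LogCatAppender;FileAppender
microlog.formatter=PatternFormatter
```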

For those of you who want to try it out, it is available for download from the project site. Please note that the latest release (V0.5) is rather experimental, but please try it out and give us feedback.

Apr 21, 2010

Printing Stuff from an Android Phone with PrinterShare

I am always on a quest to find nice applications for my Android phone. There are many not so good applications on the Android Market. Many applications are fun for a couple of minutes, but not so useful. Today I found one that I liked. I do not know if I am going to use it a lot, but the concept is simple and yet powerful. I am talking about the PrinterShare application.

The PrinterShare application is used for printing from your Android phone. You can print pictures, web pages, contacts, messages, e-mail etc. PrinterShare is also added to the share menu. You have two ways of printing your stuff: either using a nearby printer or a remote printer. A nearby printer is found via a Wifi access point. To use a remote printer you need to install an application on your computer and register yourself at the PrinterShare site. Let me tell you what happened when I tried out the PrinterShare application.

Finding a nearby printer was very easy. My printer is connected to my Synology DS110j, which has support for the Bonjour protocol. Since the PrinterShare application also has support for Bonjour, they found each other without any problems. But I was a little bit disappointed when I could only print a test page. To print using a nearby printer you need to buy the pro version of the PrinterShare application. The really bad thing here is that paid applications are not available here in Sweden. But luckily enough it seems to be possible to buy a key using PayPal. The price is a modest $4.95.

Finding my nearby printer.

Since I was curious to try the application out, I registered myself at the site. The registration was simple. I entered my e-mail address, and in return I got an e-mail with an auto generated user id and password. A couple of minutes later I had managed to print a picture from my phone. Nice and easy.

Besides the setback with the local printer solution, I was impressed by how simple it was to get everything to work. It is applications like this that are nice to have installed on your Android phone. I am not sure if this is an application that I am going to use very often, but it is rather cool. If I end up using it extensively, I think it would be worth paying $4.95.

What is your favorite Android application? Is there any Android application that you could not live without?

Apr 5, 2010

Back at Home :)

I am finally back from my vacation. Last week I was in Egypt with my family. There was an Internet café in the reception of the hotel, but it only worked once while I was there. But I guess it was a good thing to be offline for some time. At least my family appreciated it. Prior to that I was in Istanbul, Turkey. This was a trip together with my colleagues. I learned a lot of new things about the Android platform. The most interesting thing was to try out Robotium, a testing framework for Android. It is built around the existing testing framework found in Android, but it is much simpler. If you want to know more, please read this excellent article.

That is all for now. Nice to be back at home with a working Internet connection.

Mar 31, 2010

Using Amazon S3 to Backup a Synology DS110j

One of the most important features of the Synology DS110j NAS is the ability to let it back itself up. This is actually a new feature in V2.3 of the DiskStation firmware. Here are my first impressions.

There is one thing that bothers me: I could not re-use the old bucket which I created when using Jungle Disk, at least as far as I could tell. When selecting the existing bucket, the DiskStation does not accept it because of its long name. Jungle Disk also creates a strange directory layout with something that looks like a tree map. This is normally hidden from the user; Jungle Disk mounts a network drive that looks like a normal directory. If I remember correctly this is something that I chose when creating the original bucket. Anyway, I created a new bucket and started a backup of a small amount of files to test how it worked. The DiskStation solution is much simpler than Jungle Disk. One must consider that the Amazon backup is a new feature of the DiskStation software, while Jungle Disk has been around for some time. For example, you cannot encrypt the content. You only get an encrypted transfer, that is, HTTPS is used for the transfer. Another thing that I miss is a progress bar or similar. As it is today, you only see the status like this:

You only see that the backup is going on. I think I need to file a feature request for this. Notice that the backup type is a little bit misleading. As I wrote earlier, it is only the transfer that is encrypted.

The Jungle Disk desktop software discovered the new bucket I created for the DiskStation server. It informed me that the bucket was compatible with Jungle Disk. I was then a little bit surprised when I could not discover any contents there. I decided to get another Amazon S3 browser and found CyberDuck. It has the capability to browse an Amazon S3 account, as well as a multitude of other servers like FTP, Samba etc. The FTP browser could come in handy when connecting to the FTP server in the DiskStation. (I have actually tested it and it worked without any problems.)

The CyberDuck browser works like a charm with all my buckets, see the picture below.

This shows the directory layout for the DiskStation and the Jungle Disk buckets. Notice that I have decided not to do incremental backups for the DiskStation. This is because I want to be able to restore my data even if my DiskStation crashes. I could for example use CyberDuck to download all the contents. I think that I have configured Jungle Disk to do incremental backups, so this is a bit like comparing apples and oranges. Anyway, I am a little bit wiser than I was when I configured Jungle Disk in the beginning. The drawback is of course that it uses a lot more space on the server. My opinion is that it is worth the extra money to be on the safe side. It is going to take some days to backup the pictures once more, but the DiskStation can work 24/7 without affecting my daily work. Meanwhile I can test other DiskStation features. There seem not to be any performance problems using other services while doing the backup. But this is only an issue during the first backup. When the initial backup has been performed, I have scheduled the backup to run every night.

All in all I think that this solution is much better than the old one. All of our media files are now stored on the DiskStation and the backup is done as described. On my Mac I run TimeMachine to backup my laptop automatically. I will be able to go to sleep each night being sure that the things that I have added are safely transferred to my Amazon S3 account.

Mar 26, 2010

Configuring and Installing my Synology DS110j NAS

It is now time to configure and install my brand new Synology DS110j NAS. It comes with a CD containing V2.2 of the firmware and the desktop applications. However I want to use V2.3, since it has support for backing up to a remote Amazon S3 account, so I had downloaded and installed V2.3 of the desktop applications. The desktop applications are the Synology Assistant and the DownloadRedirector. The Assistant is used to find all the Synology servers on your network. When it finds one, you can use it to configure the server and do basic setup. It also contains a resource monitor and a photo uploader. The DownloadRedirector is used for setting up downloads that download files directly to the DiskStation. Thus it is time to start the Synology Assistant.

When starting the Assistant you see a list of servers. In my case there was only one. Its status was "not configured". By double clicking on it, I started a configuration wizard. I used the standard settings on every page. However, on the last page I was given the choice to select a firmware version. Good! I selected the downloaded zip file. It complained that it was not the correct file extension. It should be a .pat file, that is, a firmware patch file. OK, I double clicked on the zip file to extract it. No .pat file inside it. I downloaded another version to figure out if it contained a .pat file. But no success. I decided to go with the supplied version while I figured out what the problem was. Some people have complained about the long configuration time; especially the formatting has been causing problems for a lot of people. When I started the configuration I was told it could take about 10 minutes. It looked like this:

Meanwhile I was trying to figure out what the problem with my downloaded firmware was. I could not figure it out. Time to test the support. I filled in a rather basic issue report on-line. After that I continued looking for the problem. I unzipped the original zip file once more. What? I saw some status information flashing by, "Unzipping .pat file" or something. Does it really contain a .pat file? Strange! Is it a hidden file? No, then it would not be possible to select the file by default. Maybe it was the Archive Utility that was causing me all the trouble? I decided to use the command line unzip. And there it was. The zip contained a .pat file. However, the .pat file seems to be a zipped file as well. When it is discovered by the Archive Utility, it unzips this as well. In normal cases this is a good thing. If you extract a .tar.gz file, this is what you want.

For those of you who encounter the same problem, let me explain in more detail. Start the Terminal; you find it in the Utilities folder inside the Applications folder. Then type the following (assuming that you downloaded the zip file to the Downloads folder), where ZIPFILE is the name of the downloaded file:

cd Downloads
unzip ZIPFILE

As a result you have a file called synology_88f6281_110j_1141.pat in your Downloads folder. Very easy if you know how to do it.

Now let me continue with the story. The next thing I decided to do was to set up the printer. The printer has always been connected to my wife's computer. The reason for this is that she simply prints more documents than me, and my laptop is not always at home. However I find it annoying to start her computer to print a single document. This day, for once, I really needed to print a document.

When the device was configured, I connected to it. This was done by double clicking it, or selecting it and clicking connect. A web page prompted me to log in.

After login I found the "USB Printer" in the folder "External Devices". I plugged in the printer and it was automatically discovered. This is how it looked:

I googled and found the same description on the Synology wiki. However it contained more information for newer Mac OS versions. I started the configuration, but halted when I was prompted to enter an IP address. Not good! I should be able to enter a name and have the IP resolved automatically. After a little bit of research I found something called Bonjour. It is a service discovery protocol; actually it is Apple's implementation of Zeroconf. I found the configuration for this in Home->File Sharing->Win/Mac OS. I enabled the Bonjour broadcast as seen below.

From here on it was simple. I just opened "System Preferences", clicked the "Print & Fax" button and pressed the + button below the printer list. The printer was automatically discovered. A very easy setup.

This is all for now. Look out for my adventures with my DiskStation.

Mar 22, 2010

Unboxing Synology DS110j NAS

Yesterday I came home from Istanbul, Turkey. The NAS that I ordered a couple of weeks ago arrived at the DHL pickup place the day I left for Turkey. As you might have guessed, I have been eagerly waiting to pick it up. Today I got some spare time to do so. I have always loved to unbox new technical stuff. This time I thought it was a good idea to do an unboxing article. Please enjoy!

The first look at the stuff I received:

To the right you see the Synology DS110j NAS. There is no harddrive included, so you must install one yourself. To the left you see the harddrive that I ordered. It is a Western Digital Green 640 GB. All Synology products are energy conscious. Thus I hope that I get a solution that is environmentally friendly and does not give me big electrical bills.

Here you see the label on the side listing the main features of the DS110j. The specifications are there as well. For more information please download this document.

The box contains two smaller boxes and the power cord. The box to the left contains the NAS device and the box to the right some additional cables and the power supply. Let us take a closer look at the box to the left.

Now we are getting somewhere. The NAS is about to be unveiled. But it seems that I need to read something first. Synology have saved some trees by printing a very small booklet. It refers to the documentation stored on the enclosed CD. So you need a computer to set things up, but that should be no surprise since it is intended to be used with your computer.

This is how the front looks

A fairly regular set of LEDs and buttons. From top to bottom, the LEDs are: STATUS, LAN, DISK. Below the DISK LED there is an unnamed LED. Maybe an LED showing the USB status? I do not know. Then comes a USB connector. Here you can connect a harddrive or a printer. If you connect a harddrive or a memory stick here, it is possible to use the USBCopy functionality. By pressing the button below it, the content is copied to the DiskStation. At the very bottom you find the On/Off button and the corresponding LED. This turns blue when the device is on. It blinks when the device is starting up.

The back looks like this


At the top you see the fan. Below it are two USB connectors. These can be used the same way as the front connector. Next up is the Ethernet connector. There is also a reset button and a power connector. To the right you have a keylock hole.

Now it is time to take a look inside. Just put it on the table and push the lid forward. Very easy. I still remember those old PC cases where it took a very long time to get the case open.


To install the harddrive you just push it into the slide. When it is in place, you use the included screws to fasten the harddrive.



On with the lid again. Two more screws to fasten the lid and the NAS is ready to use. Connect the power supply and the network cable to the switch. It only took me a couple of minutes to install the harddrive and connect the NAS. I think that even a non-technical person could fix this very fast.

Now everything is in place. The NAS sits near the office switch. The switch is a D-Link Green DGS-1005D, which is also environmentally friendly: it switches off the power when a connection is idle for a long time and adjusts itself to the length of the connected cables. The supplied network cable was too long, so I replaced it with a shorter one (0.5 m).

Now it is ready to be used, nearly anyway: it still needs to be configured. But that is another story. The unboxing took me less than 20 minutes, although I did it very slowly and spent some time taking photos. As mentioned, it is easy to install your own harddrive and easy to get started. So far so good.

Mar 9, 2010

My NAS Research (D-link, Netgear and Synology)

Last week I decided to buy a NAS. Every time I buy new geek stuff, I spend many hours on research to select the optimal model. I usually start out with a price comparison site like pricerunner or prisjakt (in Swedish only). Of course I set up some requirements as well. My initial requirements were:
  1. Support for Windows & Mac OS (Samba or an equivalent solution)
  2. Secure FTP server
  3. Reasonable amount of harddisk space
Nothing special, but at least I need to know what I am looking for. After a couple of hours of hard research I had these alternatives left:
  1. Netgear Stora Home Network Storage MS2110
  2. D-Link DNS-313 or D-Link DNS-323
  3. Synology Diskstation DS110J or DS210J NAS server
Of course there are a lot of other brands, but these are the ones that felt right for me. Both Netgear and D-Link are brands that I knew before, but I had never heard of Synology. The Netgear model is the most affordable. It comes with a pre-installed 1 TB harddisk and has the option of adding an additional harddisk. The D-Link and Synology NAS servers require you to install one or two harddisks yourself. I prefer to select the harddisk myself. The Netgear also looks a little bit bulky. The D-Link and Synology models both have support for Samba and FTP. The feature that sticks out on the Synology NAS is the Amazon S3 support. Today I make backups using an S3 account. It would be even more convenient if the NAS could do that for me; then I could schedule backups for when I am not at home and have my computer with me. Last but not least, Synology has forums with many active users.

When I receive the NAS I want to change the network a little bit as well. This is how it looks today:

Our family photos are saved on either of the external harddisks. I sync these harddisks with the excellent Beyond Compare tool. When they are in sync, I use Jungle Disk to do the backup. Even though Beyond Compare is my favorite tool, I find it a little bit tedious to synchronize these harddisks. This is where the new solution should be a little bit smoother.

This is what it should look like after the upgrade.

Now we can store all of our photos on the Synology NAS server. The NAS server does backups at scheduled times, so there is no need to synchronize the harddisks anymore. Each of the computers should back up other valuable content to the NAS as well. There is still room for improvement. For example, I plan to have wired connections to all computers in the household. The "old" switch that is used for the home entertainment system should be replaced by a faster one.

But what model did I choose? You might have guessed it; I ordered the Synology DS110J together with a Western Digital Green 640 GB harddisk. This way I have an environmentally friendly solution. All Synology products are designed to be environmentally friendly. Notice that I use a D-Link Green DGS-1005D switch in the new network solution.

That is all for now. I am really eager to get started with my new NAS. The original ETA was tomorrow, but the date seems to have slipped. I hope that it will arrive before the weekend.

Feb 23, 2010

Starting the 3.0 development of Microlog?

What would it take to upgrade Microlog to 3.0? A while ago I released V2.3.4 of Microlog. There was only one "small" change: the Microlog instrument and instrument example modules were activated once more. They had not been part of any previous 2.X release. But was it a good decision to make only a patch release for such a change? I must admit that I think it was not. The change was big enough to warrant a minor release or even a major release. What should I do to remedy the situation?

I think the best solution would be to make a major release, that is, Microlog V3.0. This makes it possible to make some other changes that I want to do. The changes that first come to mind are these:
  • Add the logging level NONE.
  • Remove the Android module
  • Remove the Amazon S3 module
  • Cleanup the PropertyConfigurator
  • ?
Some explanation might be in order. Adding the level NONE gives the possibility to stop the logging without removing the logging calls from the code. This is sometimes used for production builds, with the possibility to enable logging after distribution. The feature has been asked for many times. To be honest, I do not remember why it has not been implemented before.
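A minimal sketch of the NONE idea, assuming a simple threshold check (the numeric values and names are my illustration, not the actual Microlog constants): NONE simply gets a value higher than every real level, so no message ever passes the check.

```java
public class LevelNoneDemo {
    // Hypothetical level values; the real Microlog constants may differ.
    static final int DEBUG = 10;
    static final int INFO  = 20;
    static final int ERROR = 40;
    static final int NONE  = Integer.MAX_VALUE; // no real level reaches this

    static int threshold = NONE;

    // A message is logged only if its level reaches the configured threshold.
    static boolean isLoggable(int level) {
        return level >= threshold;
    }

    public static void main(String[] args) {
        threshold = INFO;
        System.out.println(isLoggable(ERROR)); // true: normal logging
        threshold = NONE;
        System.out.println(isLoggable(ERROR)); // false: logging is effectively off
    }
}
```

The point is that the logging calls stay in the code; flipping the threshold back to a real level after distribution re-enables them.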

Removing the Android module is for obvious reasons: since we have forked the Android parts, it is no longer necessary to keep them in the Microlog project. But it might not be obvious why the Amazon S3 module should be removed. For me this was an experiment, but not one that turned out well. It is big and clumsy. A 3rd party library is used for the SOAP communication, which also adds to the size. Fortunately it is kept in a separate jar, so only developers using the Amazon S3 module are affected. Another drawback is that we need to maintain this module. I would rather spend time on maintaining the core and other modules that are more important. But this is certainly worth discussing. What do you think?

Cleaning up the PropertyConfigurator is something that I have wanted to do for a while now. When adding the hierarchical logging support in V2, I kept the original configuration possibilities. That is not wrong. However, there are parts of the code that could be shared between the Microlog classic configuration and the new configuration. The parsing of attributes could be handled by a separate StringTokenizer. All in all, I think the configuration classes could be slimmed down.
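The attribute parsing mentioned above could be sketched with a StringTokenizer roughly like this (the property value format and the class name are my assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class AttributeParser {
    // Parse a hypothetical value like "ConsoleAppender;file=log.txt;append=true"
    // into a map of attributes, skipping tokens without a '='.
    static Map<String, String> parseAttributes(String value) {
        Map<String, String> attributes = new HashMap<String, String>();
        StringTokenizer tokenizer = new StringTokenizer(value, ";");
        while (tokenizer.hasMoreTokens()) {
            String token = tokenizer.nextToken();
            int eq = token.indexOf('=');
            if (eq > 0) {
                attributes.put(token.substring(0, eq), token.substring(eq + 1));
            }
        }
        return attributes;
    }

    public static void main(String[] args) {
        Map<String, String> attrs =
                parseAttributes("ConsoleAppender;file=log.txt;append=true");
        System.out.println(attrs.get("file"));   // log.txt
        System.out.println(attrs.get("append")); // true
    }
}
```

A shared helper like this could serve both the classic and the new configuration code, which is the kind of slimming I have in mind.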

I guess there are more parts that could be improved and/or added to the list for Microlog V3, but this was all that I could think of right now. Do you have any suggestions?

    Feb 21, 2010

    Reading Properties Files on Android (Investigation for microlog4android)

    Today I have investigated the possibilities for reading properties files on Android. In microlog4android there is currently no way to configure via a file. The microlog4android development team feels that we need to re-write the configuration from scratch. There are many reasons for this decision; for example, the Properties class is available on Android, while in Java ME we use microproperties. These are my findings about properties files on Android.

    My first thought was to use the SharedPreferences class. My brain picked up this idea while using SharedPreferences in another project. SharedPreferences is used to read/write preferences for an Android application. The preferences are stored in an XML file that is put in the private directory structure of the application. You can edit the file by pulling it from the device or emulator using adb pull. After editing it you push it back using the adb push command. If you are using the Android Eclipse plugin you can use the file browser to do the same trick. But it is cumbersome to pull/edit/push the file, and I do not really like using XML for storing properties.

    But how do we do it in Microlog? The properties are stored in a properties file that is put in the JAR, or by setting properties in the JAD file. Both are easily edited in Eclipse with no need to pull/push the file. The same goes for NetBeans. The properties file is bundled with the jar. Why not do the same on Android? Time to Google again! I found out that there were primarily two ways of doing this:
    1. Using the AssetManager
    2. Reading a raw resource
    To test this I created a simple Android project in Eclipse and copied two Microlog properties files into it. One was put in the /assets directory, while the other was put in the /res/raw directory. The first thing that happened was that Eclipse complained about the naming of the file that was put in the /res/raw directory. It was kind enough to inform me that only [0..9][a..z] were allowed. Changing the capital V to a lower case v was simple. Now time to do some coding. The code for reading using the AssetManager looked like this:

    Resources resources = this.getResources();
    AssetManager assetManager = resources.getAssets();
    // Read from the /assets directory
    try {
        // The file name here is an assumption; use whatever name the
        // properties file has in /assets (capital letters are allowed).
        InputStream inputStream = assetManager.open("micrologV2.properties");
        Properties properties = new Properties();
        properties.load(inputStream);
        System.out.println("The properties are now loaded");
        System.out.println("properties: " + properties);
    } catch (IOException e) {
        System.err.println("Failed to open microlog property file");
    }

    The code is dead simple, and thank God for the Properties class, although microproperties would do the trick. The second approach is as simple as the first. The code looks like this:

    // Read from the /res/raw directory
    try {
        InputStream rawResource = resources.openRawResource(R.raw.micrologv2);
        Properties properties = new Properties();
        properties.load(rawResource);
        System.out.println("The properties are now loaded");
        System.out.println("properties: " + properties);
    } catch (NotFoundException e) {
        System.err.println("Did not find raw resource: " + e);
    } catch (IOException e) {
        System.err.println("Failed to open microlog property file");
    }

    Notice that I omitted the code for getting the resources, since it was part of the first example. Both ways seem to be good. But what to choose? Approach 1) has the following advantages from a microlog4android perspective:
    1. You are not limited to lower case letters in the file name.
    2. It is possible to use a default file name if the user does not specify one.
    But I will probably implement both solutions, since this gives the user the freedom to choose where to put the file. What do you think? Are there any other solutions that I have missed?
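    The default-file-name idea from advantage 2) could be sketched like this. This is plain Java, not the microlog4android code: the helper `openAsset` stands in for `AssetManager.open`, and the file names are my assumptions.

```java
import java.io.ByteArrayInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

public class ConfigResolver {
    static final String DEFAULT_FILE_NAME = "microlog.properties"; // assumed default

    // Stand-in for AssetManager.open(); throws if the asset does not exist.
    static InputStream openAsset(String name) throws IOException {
        if (DEFAULT_FILE_NAME.equals(name)) {
            return new ByteArrayInputStream("microlog.level=INFO".getBytes());
        }
        throw new FileNotFoundException(name);
    }

    // Try the user-specified file name first, then fall back to the default.
    static InputStream openConfig(String userFileName) throws IOException {
        if (userFileName != null) {
            try {
                return openAsset(userFileName);
            } catch (IOException e) {
                // fall through to the default file name
            }
        }
        return openAsset(DEFAULT_FILE_NAME);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(openConfig(null) != null);          // true
        System.out.println(openConfig("missing.cfg") != null); // true, falls back
    }
}
```

With the raw-resource approach this kind of fallback is not possible, since the resource id is fixed at compile time.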

    Feb 14, 2010

    Ego Surfing - Microlog4Eclipse

    Sometimes I do some ego-surfing: I check if there is anything written about Microlog or any other open source projects that I am involved in. One funny thing I found was the podcast that Darius Katz and I were part of, available as a ringtone. Maybe it is a bestseller? :) But I just found a more serious project, namely the microlog4eclipse project. It is an Eclipse plugin to simplify the usage of Microlog. Or as it says on the project page: "Plugin for j2me microlog logger based on android adt plugin". I wonder if they are using the Android part of the Microlog library. There is no download available at the moment. I will investigate this further; I am curious to check it out. It is interesting that someone is actually writing a dedicated Eclipse plugin for Microlog.

    Feb 13, 2010

    Some Words about Microlog and Some Maven Stuff

    As you faithful readers know, I am a believer in “Release early, release often”. Therefore I have tried to streamline the release procedure as much as possible. It must not feel like a burden to do a release. In fact it should be as fun as watching a comedy film, almost at least. But this time it was not very fun.

    About two weeks ago, one of my fellow Microlog developers committed some changes. It was time to make a new release! I always start out with the deployment of Maven artifacts to the Sonatype repository. Ever since I changed from using the website at SourceForge as a repository to the Sonatype repository, this is a process that works like a charm. It usually takes a little more than two minutes. After that point the artifacts are available in a Nexus staging repository. If something is wrong, you can drop the release. If the release seems to be ok, you close and promote the artifacts. When you close the staging release, a validation takes place that checks if everything is ok.

    The validation process has always worked for me since I first got it right. But not this time! It complained about missing information in all the modules, information that is available in the parent module. As a coincidence, or not, a discussion about this had started on the Sonatype forums. As it turned out, this was a bug in a new validation scheme. There was a workaround, but I did not feel that it was worth it. From experience I knew that the folks at Sonatype usually fix this kind of thing in a day or two. This time it took a few days more.

    When the validation bug was fixed, it was time to give it another try. This time the validation complained about a file not being signed (site.xml), a file that should not be signed, at least not according to me. It was part of the parent module. To get rid of this problem I decided to make a new sub-module for the site generation and see if I could sign the file that way. After a while I was able to sign the file. However, now the site was not generated correctly. Not good! I moved the site.xml file back to the parent module, but kept the new module since I thought the site generation was good anyway. The site generation worked again, but the file was not signed. Time to google. Now I found a solution that should work: disable the deployment of artifacts from the parent module. It worked, almost anyway; now none of the artifacts were deployed. More googling. Aha, it is possible to disable the inheritance of some build plugins. Now it worked the way I wanted. Nice! The process of finding out how this should be done took me “only” about 4 hours. But there was a bonus: I could now disable deployment for modules whose artifacts were not very useful to deploy. For those who wonder how to disable artifact deployment, take a look at this helpful article.
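    For reference, the combination of skipping deployment and disabling plugin inheritance can be sketched roughly like this in the parent pom.xml. This is my reconstruction of the pattern, not the exact Microlog pom: the deploy plugin is told to skip in the parent, and `inherited=false` keeps that skip from leaking into the child modules.

```xml
<!-- Parent pom.xml: skip deploying the parent's own artifact, but do not
     let child modules inherit this configuration, so they deploy normally. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-deploy-plugin</artifactId>
      <inherited>false</inherited>
      <configuration>
        <skip>true</skip>
      </configuration>
    </plugin>
  </plugins>
</build>
```

    The same `skip` configuration, without `inherited=false`, can be placed in any individual module whose artifact is not worth deploying.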

    Finally I managed to make the release and deploy the artifacts to the Sonatype repository. The artifacts are available in Maven Central as well. For those of you who are interested, the version is now 2.3.5. Enjoy!

    Feb 4, 2010

    R.I.P. Kenai - Microlog4Android has been Moved

    I admit it. I was wrong. It is not business as usual for microlog4android. Yesterday I got an e-mail explaining that Kenai will be shut down. Fortunately there are other hosting options. I wanted to move the project as fast as possible and have a simple solution. My esteemed colleague Hugo recommended using Google Code together with GitHub. Google Code allows a project to use other services, such as GitHub, when Google Code is not sufficient. Moving the source code was rather simple. The part that took time was generating a new SSH key for my new computer. The nice thing with git is that all the history is preserved.

    To move the source code from the Kenai git repository I did the following:

    git pull git://

    git remote add

    git push origin master

    It was as simple as that. The first line was needed to pull the source code out of Kenai, since I had no backup copy on my new computer. So for a user with the code already on his computer, it would be even simpler.

    Right now it is time for me to relax a little. I know I have a lot of things to do tomorrow at work. Please stay tuned for more updates about microlog4android.

    The project:

    The source code:

    Jan 28, 2010

    The Future of Kenai?

    As I wrote in an earlier article, the Android part of Microlog has been forked into a new project. The project is hosted on Project Kenai beta, which is Sun's offering for open source projects. With Oracle's acquisition of Sun, this project seems to be endangered, but I am not sure how it will be affected. If things are going to change, I sure hope that the projects will get support for the migration, if any. For more information, read the article about the subject on Kenai's blog.

    I guess we have to continue with business as usual for microlog4android.

    Jan 17, 2010

    The Beauty and the Beast - Apple MacBook and Dell XPS M1710

    A couple of years ago I bought a Dell XPS M1710. At the time it was one of the fastest laptop computers around. It is primarily designed as a gaming/multimedia laptop, and the design is made to appeal to gamers (see picture). As a developer I need a high performance computer, so this seemed like a perfect choice for me, at least that was what I thought. As it turned out, it has not been as good as one could imagine. There have been many problems with it over the years. The biggest problem is the fans. When running Windows Vista they are quiet at first, but the first time you use Maven, or something similar, the fans are up to full speed within seconds. I have tried several fan control applications, but these did not solve the problem; it only manifested itself in other ways. I also tried Linux. With Linux things are much better; Linux does not seem to strain the CPU as much as Windows Vista. Fortunately Dell support gave me two new fans. Another problem is the battery. At full capacity it lasts for about 2 hours of normal work, and after about 18 months I needed to buy a new one. That is not really acceptable. Speaking of power, I had to replace the power supply as well. As with the fans, it was replaced by Dell. Thank God I had paid extra to get full support. Or rather, I should thank my colleague Darius, who convinced me to pay some extra bucks for the full support.

    Before Christmas it was time to buy a new laptop. I was a little tired of the bad quality of my Dell computers, so I decided to try something new: an Apple MacBook Pro. By selecting a 15” screen I was able to fit in a Solid State Drive (SSD) within budget. It is not so big, only 128 GB, but since I use an external harddisk for data, this is not much of a problem.

    Now here I am, sitting by my brand new MacBook and writing this article. I get about 7-8 hours of battery capacity. According to the specification I should only have 7 hours, but I think the SSD gives me some extra time; an SSD is supposed to use less power than a normal drive. Of course this will degrade over time, but it is on par with my Asus netbook computer. So far I am very satisfied with my new computer. It was not as hard to get used to as I initially anticipated. Many people have told me about their frustration when switching to an Apple computer, but I am familiar with Unix and Linux, and many things can be achieved in a bash shell. In my daily work I use VirtualBox to run Ubuntu Linux. My fear was that the performance would be too bad and get annoyingly slow, but so far I am not frustrated by slow speed. In fact I was surprised. I had been using VirtualBox on my old XPS, but it was too slow to be really enjoyable. What more is there to say? The MacBook works like a charm and is really fun to use. It is fun to compute! ;-)

    The Beauty
    The Beast

    The Beauty and the Beast