Wednesday, December 17, 2008

Control versioning in new maven-release-plugin

I have been waiting with much anticipation for the new version of the maven-release-plugin to be, well, released. Our team uses this time-saving plugin heavily, but since moving to a MAJOR.MINOR.PATCH version strategy we quickly found out that it needed some new features. What we really needed was the ability to specify versions for the tag and for the new trunk. For example, when HEAD is at 2.1.0-SNAPSHOT, in batch mode the new default development version would be 2.1.1-SNAPSHOT, but I want it to be 2.2.0-SNAPSHOT.

Fortunately, someone else was thinking the same thing and the new capabilities were already in the works. We are currently using 2.0-beta-7, but I noticed today the maven team finally released 2.0-beta-8 (see repo1). Here are the release notes for the new version.

What I was specifically interested in was the new batch-mode features. Now instead of having to manually update the POMs after creating a release I can do the following automatically in CI:

mvn release:prepare -DdevelopmentVersion=2.2.0-SNAPSHOT
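Presumably the tag version can be pinned at the same time; something like this (the exact combination of flags is my assumption until I try it):

mvn --batch-mode release:prepare -DreleaseVersion=2.1.0 -DdevelopmentVersion=2.2.0-SNAPSHOT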

I haven't tested it yet, but will know in the near future if it works.

Friday, December 5, 2008

Maven Profiles: Something you need to know

An issue came up today in our CI environment and I learned something new about maven profiles, something everyone who uses maven should know. Perhaps, like yours, our project uses maven2 to build, and we take advantage of multiple profiles (see our example here). A common use for profiles is to define new modules and properties. For example:

<profiles>
  <profile>
    <id>integration-tests</id>
    <modules>
      <module>integration</module>
    </modules>
    <properties>
      <app.port>8080</app.port>
    </properties>
  </profile>
</profiles>

This profile is activated when you run mvn -P integration-tests install. But sometimes you want a profile on all the time. Maven profiles have the ability to be activated by default. To do this just add the following:
<profiles>
  <profile>
    <id>integration-tests</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <modules>
      <module>integration</module>
    </modules>
    <properties>
      <app.port>8080</app.port>
    </properties>
  </profile>
</profiles>

Then you can just run mvn install and that module and property will already be active.

activeByDefault.....Sometimes
Turns out that perhaps maven should have named it activeByDefaultSometimes. Reading the documentation I learned something important about maven profiles:
"This profile will automatically be active for all builds unless another profile in the same pom is activated using one of the previously described methods. All profiles that are active by default are automatically deactivated when a profile in the pom is activated on the command line or through its activation config."
So in my example, if I have more than one profile defined in my pom and I activate that other profile, then my integration-tests profile is deactivated. For example, when I run mvn -P SecondProfile, the module and property defined in the integration-tests profile are not available to my build. One trick to see which profiles are active when running maven is to run mvn help:active-profiles.

In general I have never been a huge fan of profiles. They feel clumsy, clutter up the command line, and are not well known among your team. As the documentation states, "adding profiles to your build has the potential to break portability for your project". Instead of using profiles, attach to known phases. For example, see my second example from this previous post, where I use the maven-antrun-plugin to attach to the install phase. That runs any time I run mvn install, with no clumsy profile to memorize and run. That said, I am not saying you should avoid profiles altogether; just use them as a last resort.
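For reference, the phase-attachment approach looks roughly like this (a sketch, with an echo task standing in for the real work):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>install</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- stand-in for the real work; runs on every mvn install, no profile needed -->
          <echo message="attached to the install phase"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>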

Wednesday, November 5, 2008

Spare cycles to write some posts

Starting this week I am going to spend the next couple of weeks in Panama City, Florida on business, working with the 1st AF at Tyndall Air Force Base. In my down time, like right now while I copy hundreds of MBs over a slow VPN connection, I plan on catching up on some blog ideas I have been wanting to write.

Here is my goal:

  • Discuss and promote our iPhone app BorrowMe
  • Multi-part series on security. For the past two sprints I have written about 6 lines of code, but I have learned a ton about securing our web-based application using LDAP+Active Directory and SmartCard authentication, or CAC (Common Access Card) authentication as it is referred to in the DoD (U.S. Department of Defense), along with Apache, SSL, and OpenSSO.
  • Talk some more about Ubuntu JeOS

Wednesday, October 22, 2008

More tips on using the maven2 release plugin

This blog's most frequently visited post was the one I did over a year ago titled "How to create a release using the maven2 release plugin". Automating this portion of our frequent release process without a doubt has saved my team hundreds of hours over the past year. For that reason I would like to provide some new tips I discovered yesterday when I needed to change the subversion comment used by the release plugin when committing any changes.

For teams not using the maven-release-plugin yet, this handy plugin helps automate the complete process when releasing your software. It helps save time by doing the following and more:

  • First it will build your project running any tests you specify
  • If successful, it will then commit your project into a tag
  • Update the tag pom versions from say 1.0.0-SNAPSHOT to 1.0.0
  • Change the tag pom SCM URL to correctly point to the tag instead of HEAD
  • Most importantly, it will build the release (1.0.0) and upload the artifacts to your company's maven2 repository (archiva now in our case; was artifactory).
  • Finally, it will increment the pom versions in HEAD to be the next release (1.0.1-SNAPSHOT).

Tip #1 - Change Commit Comment
By default the maven-release-plugin uses a comment like [maven-release-plugin] prepare release project-1.0.0 when doing any type of commit, such as creating a tag/branch or incrementing the pom versions. This works wonderfully until your team implements a Subversion pre-commit hook requiring all commits to start with certain keywords (Story: ZZZ, Jira: ZZ-N). Luckily the maven-release-plugin has an option to override the comment.

There are two simple ways to provide the plugin with the comment. The first sets the system property scmCommentPrefix to the predefined prefix (quote the value, since it contains spaces). The second provides the prefix in the pom.

mvn release:clean release:prepare -DscmCommentPrefix="Jira: AC-100 [maven-release-plugin]"

<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <configuration>
      <scmCommentPrefix>Jira: AC-100 [maven-release-plugin]</scmCommentPrefix>
    </configuration>
  </plugin>
</plugins>

Tip #2 - Configuration Options via -D
I am disappointed that I didn't catch this earlier, especially since I have been using -DpreparationGoals for a while now, but all of the optional parameters for the prepare goal can be defined with -D rather than in the POM (as above). So if you wanted to change the default tagBase to use /branches instead of the default /tags, do the following. However, I would still put a static value like this in the pom.
mvn release:clean release:prepare -DtagBase=http://project.svn.org/svn/project/branches
Tip #3 - Allow SNAPSHOT dependencies
This new feature is going to be huge for legacy projects that weren't raised with the release plugin. As mentioned in my previous post, previous versions of the release plugin didn't allow you to have SNAPSHOT dependencies; rightfully so, too, since it wouldn't be a very good practice to have a release depend on a changing dependency. However, I think this restriction prevented many projects from using the release plugin because their snapshot dependencies were too many to change and it was just easier to keep doing the same old thing.

Even though I have not used this feature, it would seem one could now release a project using this plugin and still have SNAPSHOT dependencies. However, after creating the release it would still be a good idea to go back into the release and change any SNAPSHOT dependencies to releases.

I believe this is a new feature so you might also want to upgrade to the latest version.

Wednesday, October 15, 2008

Using Yahoo Pipes screen scraping to create an RSS Feed

The internet amazes me. Every time I use it, or write something for it, I continue to be blown away. Take, for example, my favorite website for Husker information: http://www.huskerpedia.com (for those that don't know, by husker information I mean Nebraska Cornhusker football; I was born in Nebraska, so I naturally follow the Cornhuskers). Someone takes the time to consolidate all husker information and update this site. It is the de facto standard for Husker news. Unfortunately, this site does not come with any type of notification via an RSS feed.

Now the site itself does not necessarily amaze me, but rather how I can take a tool like Yahoo Pipes, scrape the latest news, and in a couple of hours subscribe to that feed in Google Reader. Not only that, but I cloned Paul Arterburn's Huskerpedia Pipe to get started, and now everyone can take advantage of my new feed by subscribing to it, making it better, or cloning it for their own use.

This would make the second time I have used Yahoo Pipes to accomplish something complex very easily. Several months ago, I created a pipe called the Gestalt Shared Feed, that combined all of my co-workers shared items from their Google Reader feed. This allowed us to read the best of the best among ourselves and has really helped our communication. I think we have about 10 people who share items and many more who subscribe to it.

Check out my RssHuskerPedia Pipe and view the source if you are curious. See below for the before and after.





Thursday, August 28, 2008

Me lovin Git

As I stated previously, I am sold on DVCS, and today it just got better.

I came across a situation where I needed to share highly experimental code with a teammate. There was no way I could check this code into our normal repo, so what first pops into your head? Email (strongbad style). You've been here, and don't pretend you haven't used email. Oh yes, the wonderful land of emailing around jar or zip files repeatedly.

Emailing source code around stopped today. Instead I got on the phone with Ron Alleva who helped walk me through the simple git commands to create my own local master repo. It was so simple it was ridiculous.

Here are the steps to create and share hack code in minutes

  1. install git
  2. cd to the folder you wish to share
  3. run git init
  4. run git add .
  5. run git commit -a
  6. Have your teammate clone it by running git clone ssh://user@host/path/to/repo.git/
Ironically, after my teammate first cloned it and tested the code, it failed because of a recent change I made. So I made the fix, committed it to my local repo, and he was able to git pull those changes. If he makes any changes we can share them easily as well via git push.
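For reference, the round trip on his end is just (sketch):

git pull    # grab my latest fix
git push    # publish his changes back to my repo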

Wednesday, August 27, 2008

Testing the new firefox plugin Ubiquity

Today I found out about the new experimental firefox plugin Ubiquity and I wanted to give it a try in my blog. So far I am very impressed and it seems to work fine on linux, even though the documentation says they don't support it yet (I am running Ubuntu 8.04 with firefox 3).

It was easy to install. After installation firefox takes you to the firefox egg about:ubiquity, which contains one of the best tutorials I have ever followed.

To try it out you can get a map of where I work. First install ubiquity and highlight this address:

407 Pennsylvania Ave Joplin, MO

then invoke ubiquity (CTRL+SPACEBAR) and type map. Click on the map and then click Insert map in page.

Pretty slick. So far today I have used the language translation features, bold, tinyurl, and map. In a way it reminds me a lot of Gnome-Do for linux in that it provides a quick way to get information but obviously ubiquity is much more powerful.

Monday, August 18, 2008

Waiting for Free iPhone SDK

What is this, a Starbucks waiting line? Am I trying to download a microsoft product? Sure seems like it. Tonight I had a goal of getting my first Hello World app working for an iphone, but apple had other plans. While waiting for my turn in line to download the free SDK I watched the following video "Hello World Final SDK".

Saturday, August 9, 2008

Getting started with Grails and Extjs

This article describes how to get started using extjs with your grails app. Since the plugin is deprecated because of the GPL license fiasco, I decided to write my own simple grails script to handle installing extjs for the team, for 2 main reasons:
1) Prevents me from having to commit 500+ files into SVN for every new version
2) Makes it easier to upgrade to newer versions of extjs in the future

We have 2 existing internal applications that use extjs, and with every upgrade I kick myself for not using maven to extract the extjs zip file. So with our new grails app I didn't want to make the same mistake.

Install Extjs
1) Download your choice of extjs. Note that as of version 2.1 extjs is under the GPL license, meaning if your project isn't open sourced under the GPL or an internal company app, you need to use the 2.0.x versions (2.0.2 is the latest). In our case I downloaded version 2.2.

2) Copy ext-2.2.zip into your grails plugins directory

3) cd into your grails application

4) Add the zip file to svn (or whatever): svn add plugins/ext-2.2.zip

5) run grails create-script install-extjs

6) Add the script to svn: svn add scripts/InstallExtjs.groovy

7) Modify the InstallExtjs.groovy script with GANT code along the lines of the sketch below (the paths and version are assumptions; adjust to taste).
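// InstallExtjs.groovy - a minimal sketch; paths and version are assumptions
// note: older Grails/Gant versions expose the Ant builder as 'Ant' instead of 'ant'
includeTargets << grailsScript("Init")

target(main: "Unzips extjs into web-app/js/ext") {
    def version = "2.2"
    // the zip was copied into the plugins directory in step 2
    ant.unzip(src: "${basedir}/plugins/ext-${version}.zip", dest: "${basedir}/web-app/js")
    // the archive extracts to ext-2.2; rename it to a version-neutral folder
    ant.move(file: "${basedir}/web-app/js/ext-${version}", tofile: "${basedir}/web-app/js/ext")
}

setDefaultTarget(main)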

8) run grails install-extjs

9) Exclude the unzipped ext directory from svn: run svn propedit svn:ignore web-app/js and add the folder ext.

Test it out
Now that you have extjs installed you can copy one of their examples into the grails web-app directory and update the links.

1) Open up the ext-2.2.zip file again and extract the array-grid.html and array-grid.js files from the examples/grid folder to the grails web-app directory.

2) Modify the array-grid.html file. Update the relative links for css and javascript by replacing ../../ with js/ext/. For example, href="../../resources/css/ext-all.css", should now be href="js/ext/resources/css/ext-all.css"

3) Open up the array-grid.html file in your browser.

Now you have extjs installed with a simple example on how to use it. Next I would like to create a grails controller that returns JSON to populate a simple grid.
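That controller would presumably look something like this (hypothetical BookController; untested):

import grails.converters.*

class BookController {
    // rows for an extjs grid, rendered as JSON
    def data = {
        render Book.list(params) as JSON
    }
}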

Sunday, July 20, 2008

Grails: Lessons Learned

Here are some lessons learned concerning lazy vs eager fetching and how to delete a child object in a One-to-many relationship (unfortunately it's not super obvious).

First Lesson: Don't set eager fetching globally
I am by no means a grails expert, but based on my experience, don't set eager fetching in your Domain as the example shows (unless you have a very good reason and understand the consequences). Lazy vs eager fetching is well described in the grails documentation, so I won't repeat it, but anyone using One-to-many relationships needs to know the differences.

The default behavior in grails is lazy fetching, which results in n+1 queries. In some cases this might be ideal; in others it may not. When it's not, you have a couple of choices. The example in the grails documentation sets a fetchMode property on the Domain. This sets it globally, and every time the Domain is accessed grails is going to load all its many relationships. The path I recommend is to specify the fetch mode when retrieving the data. For example, the list() method has a parameter called fetch and can be used like this: Book.list(fetch: [authors: "eager"]). This gives you the most flexibility by not specifying the fetch mode globally, but allowing you to fetch eagerly when necessary.
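To make the contrast concrete, here is a sketch (Book/Author borrowed from the docs):

// global: every Book query eagerly loads its authors
class Author { String name }
class Book {
    static hasMany = [authors: Author]
    static fetchMode = [authors: 'eager']
    String title
}

// per-query: eager only when you ask for it
def books = Book.list(fetch: [authors: 'eager'])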

Second Lesson: Use Hibernate Events to help remove associations
Like myself, you might actually have a One-to-many relationship where you need to delete a child. Unfortunately this use case isn't documented very well and it actually took me a little bit to figure out.

So lets say you have the following two domains

class Parent {
    static hasMany = [kids: Kid]
    String name
}

class Kid {
    static belongsTo = [parent: Parent]
    String name
}

And you save the following
new Parent(name: "James").addToKids(name: "Ayden").save()
Now Ayden turns 18 and is going off to college, so you need to remove him. You might think this would work:
Kid.findByName("Ayden").delete()
But it doesn't, because the parent James still has a reference to the kid Ayden in the kids list (parent.kids). So you have to do the following:
def kid = Kid.findByName("Ayden")
kid.parent.removeFromKids(kid)
kid.delete()

Why that isn't the default behavior in grails I don't know, but to prevent repeating this code everywhere you can use Hibernate events. In the Kid Domain add the following beforeDelete property:
class Kid {
    static belongsTo = [parent: Parent]
    def beforeDelete = {
        parent.removeFromKids(this)
    }
    String name
}

And now when you want to remove a Kid, all you need to do is call kid.delete(). The hibernate events are interesting. By default grails supports 4 events: beforeInsert, beforeUpdate, beforeDelete, and onLoad. However, there is a recent plugin called the Hibernate Events Plugin that adds 7 more: beforeLoad, afterLoad, beforeSave, afterSave, afterInsert, afterUpdate, and afterDelete.

Friday, July 18, 2008

Grails JSON Parser

Here is a quick example of parsing JSON in grails using groovy (surprisingly, google isn't returning any good hits). Also, if you need this ability in just straight groovy, I am sure you could include the specific grails jar in your classpath.

import grails.converters.*

def jsonArray = JSON.parse("""['foo','bar', { a: 'JSONObject' }]""")
println "Class of jsonArray: ${jsonArray.class.name}"
jsonArray.each { println "Value: ${it}" }

FYI, it appears from the mailing list this was added around 1.0 RC1.

Building JSON is super easy too in grails/groovy using render ... as JSON. Just don't forget to import grails.converters.*:
render Book.list(params) as JSON

Update: Read my recent article on testing REST Services that return JSON using groovy and httpbuilder.

Friday, July 11, 2008

Groovy Threads and MetaClass example

* Update - the code has been updated. The original test was incorrect and was producing a false positive. What you see now is the correct way.

It's been a while since I have been able to play around with grails/groovy. Now, instead of using a pretend app to learn grails/groovy, I have teamed up with Jeff Black, Chad Gallemore, and Sam Jones (co-workers here in the Joplin, MO office) to rewrite an existing small internal java webapp using grails. It's a perfect application for grails and so far we are loving it.

Early on, we needed to figure out 2 things:
1) Groovy way of creating Threads
2) Writing an integration test for a Groovy Service

Groovy Threads
Below is a summary of our service that shows how to start new threads in groovy.

class ProjectService {
    // collaborator; grails injects this by name (the test below sets it manually)
    def jobService

    def discover() {
        Project.list().each {
            Thread.startDaemon {
                jobService.update(it)
            }
        }
    }
}
Service Integration test with MetaClass
Here is the integration test I wrote that tests the above Service.
void testEmptyProject() {
    def called = false

    Project.metaClass.static.list = { [] }
    Thread.metaClass.static.startDaemon = { Closure c -> c.call() }
    JobService.metaClass.update = { called = true }
    new ProjectService(jobService: new JobService()).discover()

    assertFalse('updateJobs should not have been called', called)
}

The first missing key for me was the static keyword on metaClass, since in the service I am calling Project.list() and Thread.startDaemon(). The second mystery was how to mock out Thread.startDaemon(), since there could be, and was, a race condition between the update closure setting called = true and my assertFalse.

Thanks Chad for the suggestion of using metaClass. I also got a lot of help from Glenn Smith's blog about testing controllers and Dustin's groovy thread example.

Thursday, July 3, 2008

Maven not downloading latest snapshots or releases

Ever had issues with maven not downloading the latest snapshots when you know for a fact new snapshots are available? Or your CI environment just deployed a new release (2.0), but when another Hudson job builds, maven does not download the latest 2.0 release artifact. Want an automated solution so you don't have to manually delete the artifacts from your local repository just so maven will download the latest?

Force maven to download latest snapshots
Our company uses Hudson for our automated CI environments. Our project basically has two jobs. The first job checks out and builds HEAD when modified and deploys SNAPSHOT WARs to our company's maven2 repository (artifactory). The second job, which builds nightly, uses maven to download the SNAPSHOT WARs from artifactory, creates an EAR, deploys it to JBoss, and runs integration tests. By default, maven checks only once a day for changes to snapshots, so when our second job was triggered, maven inside hudson was not downloading the latest SNAPSHOT WARs.

The solution was to append -U to the maven goals (see mvn -help). It stands for update-snapshots and tells maven to update all snapshots no matter what.
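So the second job's maven invocation simply became something like (goals illustrative):

mvn -U clean install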


Force maven to download latest releases
Our next problem was when we created a branch and started creating release artifacts such as 2.0. Unfortunately the description given by maven for the -U option is incorrect (or at least in v 2.0.9), "Forces a check for updated releases and snapshots on remote repositories". As much as I tried, the -U option wouldn't work in our hudson job to force maven to download the latest non-snapshot releases.

The only current solution I know of is to use the maven-dependency-plugin and its purge-local-repository goal. So at some point in your maven goals execute mvn dependency:purge-local-repository, and maven will physically delete your project's artifacts from the local repository (/home/user/.m2/repository) and its transitive dependencies (I think). I tried setting actTransitively to false and it didn't work for us, so I just removed it. I also set verbose to true so I could see what maven deleted in Hudson's console output.
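So the extra goal we run looks like this (verbose on, transitive purging left at its default):

mvn dependency:purge-local-repository -Dverbose=true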



The pipes are used to separate out different goals to isolate their classpaths and properties. That way we can skip tests in one run and then run them in the next, all in the same goals section.

Thursday, June 26, 2008

DVCS? I'm sold

As a casual reader of the advantages of DVCSs like git and mercurial, I think I am ready. Motivated by a great article shared by Kit Plummer, I am sold and ready to start using it. Unfortunately, it's probably not going to happen any time soon at work until a) IT supports a mercurial server like they do svn or b) I start a new project where new decisions like language (ruby/groovy) and source control can be made and I'm not hindered by "we are already using svn, why switch" or "I just got comfortable with svn" or "every team member would have to learn mercurial". Reminds me of my favorite Zed Shaw quote from this article:

"This folks is the classic problem with programmers today. They absolutely refuse to learn anything new unless they can see that learning the new hotness will give them an immediate 200% boost in salary or get them hot honeys at the next conference."
Now that I see that Hudson has a mercurial plugin, and Jira does as well, technically speaking I can't think of any other reason not to switch.

Sunday, June 15, 2008

How to FTP artifacts in maven2

If there is one word a team should keep in mind when building a CI (Continuous Integration) environment, it's Automate, Automate, and Automate (see Production-ready software, on-demand). Most teams, including mine currently, perform their releases manually: creating the branch, incrementing the POMs, uploading the artifacts to a repository, etc. This is one reason I wrote about using the maven release plugin to automate this process. I will say that having a CI server such as Hudson only helps in making automation easier.

My team also creates an official release about once every two months and supports that release anywhere from 1-2 weeks to months. It's also pretty intense during those 1-2 weeks when we have co-workers on-site installing and supporting our software in a completely different timezone, with limited access to phone and internet.

Over the weekend I wrote a simple maven2 pom to help automate FTP'ing the release artifacts to an FTP server. Last week we created a branch and a job in hudson that builds that branch. After it successfully builds we use the assembly plugin to package our EAR, documentation, SQL files, and everything else into a single folder. In the past, a guy would then manually copy these files to the FTP server, which was used by the on-site team to download the latest artifacts containing improvements and bug fixes. This release we wanted to automate this step to increase our response time for the on-site team.

First Try using GMaven Plugin
I knew that Ant had an FTP task, and I love doing Ant in Groovy because it's so much easier, so I decided to first try the GMaven plugin (maven groovy plugin). It was a short trip: the FTP task is an optional library in Ant, so you have to include the jar in your POM, but I could never get the gmaven plugin to recognize the dependency (see MGROOVY-152). Look at the Jira issue attachments if you want to reference my POM as an example. It's too bad I couldn't get this to work because it would have been sweet:

log.info('Entering in ftp script')

def config = [server: 'localhost',
              remotedir: '/home/jlorenzen/ftptest',
              user: 'ftp',
              password: 'ftp']

ant.ftp(config) {
    fileset(dir: '/home/jlorenzen/Documents')
}

Second Try Maven AntRun Plugin
I was finally able to get this to work using the maven-antrun-plugin. Here is my POM
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <executions>
        <execution>
          <id>ftp</id>
          <phase>install</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <tasks>
              <ftp server="localhost"
                   remotedir="/home/jlorenzen/ftptest"
                   userid="ftp"
                   password="ftp">
                <fileset dir="/home/jlorenzen/Documents"/>
              </ftp>
            </tasks>
          </configuration>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>ant</groupId>
          <artifactId>ant-commons-net</artifactId>
          <version>1.6.5</version>
        </dependency>
        <dependency>
          <groupId>commons-net</groupId>
          <artifactId>commons-net</artifactId>
          <version>1.4.1</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

Sunday, May 18, 2008

Accessing Hudson Variables with a Free-Style Project

Here is a great article on creating a manifest file containing hudson variables so that you can know exactly what build number and svn revision built your artifact. In my case, however, I wanted to create a properties file in my maven2 project under src/main/resources and let maven filtering take care of the rest. But for some reason it wouldn't work. After some further research, I eventually found out that free-style projects do not appear to work the same as a maven2 project in hudson when trying to access hudson variables.

In summary, hudson variables work as expected when you create a maven2 project in hudson. With a free-style project you have to perform an extra step. Why, I don't know; I am sure there is a good reason. It's just too late to figure it out.

Here are the extra steps you can follow to create a properties file containing hudson values.

Update POM
In your project's POM or parent POM add the following properties at the bottom:

<project>
  ....
  <properties>
    <build.number>${BUILD_NUMBER}</build.number>
    <build.id>${BUILD_ID}</build.id>
    <job.name>${JOB_NAME}</job.name>
    <build.tag>${BUILD_TAG}</build.tag>
    <executor.number>${EXECUTOR_NUMBER}</executor.number>
    <workspace>${WORKSPACE}</workspace>
    <hudson.url>${HUDSON_URL}</hudson.url>
    <svn.revision>${SVN_REVISION}</svn.revision>
  </properties>
</project>
Create a properties file
Create an application.properties file under your maven2 project's src/main/resources directory.

Now add this to it

build.number=${build.number}
build.id=${build.id}
job.name=${job.name}
build.tag=${build.tag}
executor.number=${executor.number}
workspace=${workspace}
hudson.url=${hudson.url}
svn.revision=${svn.revision}
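One assumption worth stating: filtering has to be enabled for src/main/resources in your POM, or the ${...} placeholders will be copied through literally:

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>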

Saturday, May 17, 2008

Groovy Sort List

I am posting a simple example of how to sort a list in groovy because the examples google knows about aren't what I was looking for. With some deep digging I was able to find a clue that eventually solved my problem.

It's really easy to sort a list of numbers:

assert [1,2,3,4] == [3,4,2,1].sort()

Or even strings

assert ['Chad','James','Travis'] == ['James','Travis','Chad'].sort()

But this was my example

class Person {
    String id
    String name
}

def list = [new Person(id: '1', name: 'James'), new Person(id: '2', name: 'Travis'), new Person(id: '3', name: 'Chad')]

list.sort() returns James, Travis, Chad

The solution is ridiculously simple (not that I thought the previous sort would work; I have to be realistic; groovy can't do everything for me).

list.sort{it.name} will produce an order of Chad, James, Travis.

In the previous example note the use of the sort closure sort {} versus the sort() method.

Now I am not sure, off the top of my head and without a Groovy book handy, of the simplest way to sort case-insensitively.

assert ['a1','A1'] == ['A1','a1'].sort{fancy closure}
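Until I look it up properly, a two-argument comparator closure seems like the straightforward route (my own sketch, not from a book):

def words = ['apple', 'Banana']
assert ['Banana', 'apple'] == words.sort() // default sort is case sensitive ('B' < 'a' in ASCII)
assert ['apple', 'Banana'] == words.sort { a, b -> a.compareToIgnoreCase(b) }

(I avoided the a1/A1 pair above because those compare equal ignoring case, so their final order depends on sort stability.)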

Saturday, May 10, 2008

Finally OpenSolaris Installed

Finally, after 4 attempts, I got a halfway decent install of Solaris. My previous attempts included installing Solaris Express Developer Edition on VMware Server 1.0.4 and VirtualBox 1.5. Both had major issues, but I had the best luck with VMware Server; it just took like 15 minutes to boot and I couldn't install vmware tools. Then at JavaOne 08 they announced http://www.opensolaris.com with version 2008.05. So I thought I would give it a try, and their install documentation seemed very thorough (see how to install on virtualbox). Unfortunately, installing this in virtualbox did not work (wouldn't boot), so I decided to give it one more shot on VMware Server, and I guess I got lucky because to my surprise it actually worked. It also boots rather fast compared to my previous experience with solaris.

Since it uses GNOME it reminded me a lot of my ubuntu system. I was even able to successfully install vmware-tools, which I got really excited about because I would have bet against that previously. Compiz is installed by default, but I was unable to change the settings yet (it kept rebooting for some reason). Maybe it's because I haven't enabled the nvidia drivers yet; it was also a surprise that the nvidia drivers were already installed.

Here are some features compared to Linux. I am not a file system guru but ZFS sounds interesting.

Anyways so far so good. I do miss sudo. I am now able to start working efficiently on providing hudson as a package in solaris. I am assuming that once I get done the below search will actually return a result (me <-- crossing my fingers).

Thursday, May 8, 2008

Why again am I not using JDK 1.6?

Why in the world am I still using jdk 1.5? It's almost 4 years old. Yesterday it just kind of hit me and I said to myself, "James, why not start using jdk 1.6 or java 6?". Is there anything technically preventing jdk 1.5 users from using jdk 1.6? Back in the day, weblogic 6.1 only worked with jdk 1.3, and when I wanted to start using jdk 1.4 it just wouldn't work. So for some reason I still had that same mindset.

So I started to do a little research, and what I found was pleasantly surprising: I immediately downloaded the latest version of java 6 and started using it (verdict still out on that, but so far so good). JBoss 4.2.1 seems to start just fine with java 6.

So what exactly did I find? First, straight from the java 6 home page is the answer to the question: Q: How is Java SE 6 different from the previous version (J2SE 5.0): what are the improved and updated areas, such as functionality, security, performance?

...the release delivers dramatic out-of-the-box benefits without any coding changes or even a re-compile necessary. Simply running existing Java applications on this latest release is all that is needed.
Now my interest is piqued. So how come I haven't heard this before? Perhaps I was just content being able to upgrade from 1.4 to 1.5.

I think this article by Rick Ross on javalobby has some great points. Here are the ones that interested me:
  • Java 6 incorporates over 300 bug fixes
  • Java 6 features general performance improvements while preserving compatibility with older versions
  • In many cases you may find that your applications run as fast or faster under Java 6 as they would if you spent significant effort tuning heap, thread and garbage collection parameters under Java 5.
That last point hits me right at home since we recently went through "significant effort tuning heap, thread, and gc params."

So what about performance? This article is an in-depth analysis of the performance improvements. The summary is: it's faster.

So as mentioned above, I downloaded java 6 update 6. Jboss 4.2.1 started just fine. Today I plan on building our EAR using maven 2.0.9 and deploying it to see if there are any issues. See the official compatibility page here. Specifically it states "Java SE 6 is upwards binary-compatible with J2SE 5.0 except for the incompatibilities listed below. Except for the noted incompatibilities, class files built with version 5.0 compilers will run correctly in JDK 6."

Thursday, May 1, 2008

Pimp my Linux: Maven2 Bash Completion

Inspired by this bash completion for ssh, I wanted to see what it would take to do something similar for maven2. Thanks to google it's already been done.

It's not perfect, but it's a start. For example, the choices are hard coded in the script. And when I tab after typing mvn assem it outputs mvn assembly\:assembly. Shouldn't be too hard to remove that extra backslash.
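For the curious, the hard-coded approach boils down to something like this (a stripped-down sketch with an abbreviated goal list):

_mvn_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # hard-coded goal list; this is exactly what the linked script improves upon
    COMPREPLY=( $(compgen -W "clean compile test package install deploy assembly:assembly help:describe" -- "$cur") )
}
complete -F _mvn_complete mvn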

Here are some thoughts on how one could improve it. First, maybe it could be improved for at least the core plugins by searching the local repo in ~/.m2/repository/org/apache/maven/plugins. Or by searching the local repo for any pom that contains a packaging value of maven-plugin. Combine that with the help plugin to get the possible goals and all their parameters, and one could probably create a pretty slick and useful bash completion for maven2.

For example, you can get everything you need for any plugin if you know the groupId and artifactId by running this:
mvn help:describe -DgroupId=org.apache.maven.plugins -DartifactId=maven-war-plugin -Dfull=true

I say all this, but I use a lot of aliases for my maven commands. Here are a few of my favorites:

alias mci='mvn clean install'

alias build='mvn clean install -Dmaven.test.skip=true -Dpmd.skip=true -Dcheckstyle.skip=true'

Using Maven War Overlays to extend Hudson

I just recently found out about one of the neatest features of the maven-war-plugin, called WAR Overlays. Basically it provides a very simple way to merge multiple WARs together to create an Uber WAR. You simply add a WAR as a dependency in your POM versus adding a JAR, and the maven-war-plugin will take care of the rest. My team uses this ability to extend the JSPWiki WAR to add in our wiki pages. The result is an Uber WAR including the JSPWiki stuff and our wiki pages. Then when a new JSPWiki WAR is available we just update the dependency version in our POM.

So as an example, I am going to demonstrate extending my favorite CI tool Hudson, since it's freaking awesome and is downloaded as a WAR. Don't try this at home, since hudson already provides the ability to extend it using plugins (which also rocks by the way).

Create a Simple WAR Project
First, create a war project using maven-archetypes. Execute mvn archetype:generate and select #18. Run mvn clean install to ensure it builds correctly (I am using maven v2.0.9).

Install hudson into local repository
Next we need to be able to consume the hudson war in our pom, and since I am unaware of the hudson war being available on any external repository, we are going to just install it manually into our local m2 repository.

  • Download the latest hudson war
  • Install hudson.war into your local repository using the mvn install plugin by running: mvn install:install-file -Dfile=hudson.war -DgroupId=hudson -DartifactId=hudson -Dversion=1.0 -Dpackaging=war
Consume the hudson.war in your pom
Open up your war's parent pom and add the hudson war as a dependency.
<dependency>
  <groupId>hudson</groupId>
  <artifactId>hudson</artifactId>
  <version>1.0</version>
  <type>war</type>
  <scope>runtime</scope>
</dependency>
Next, since we want to run hudson in embedded mode without a container, we need to add the Main class to the generated manifest file (I am too lazy to figure out how to include the original hudson manifest file, even though I am sure it's possible). Include the following in your build section:
<build>
  <finalName>mywar</finalName>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <archive>
          <manifestEntries>
            <Main-Class>Main</Main-Class>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>
Build and Run it
Now build the war again: mvn clean install. Your WAR should now be merged with the hudson war. For verification: your simple war contained an index.jsp under src/main/webapp, and if you extract the war under target you will see your index.jsp.

Run: java -jar target/mywar.war
and go to: http://localhost:8080/index.jsp

If you are doing this on a serious level, you might want to look into the maven cargo plugin's ability to create Uber wars.

Friday, April 18, 2008

Reply: JBI Misses the Mark by Ross Mason

I would like to reply to Ross Mason's article titled JBI Misses the Mark. It was too hard to include this in a comment, so I decided to post my reply here.

Ross,
As a JBI Component Developer and user of ServiceMix and OpenESB I am going to try my best to give some answers to your points with the goal of discussion and not persuasion. I am not a "die-hard" fan trying to push JBI adoption. So hopefully I will come off as unbiased and honest as possible.

"It’s been a few years since I read the JBI specification, but here goes."

As evidenced by the different ways JBI was implemented by Apache and Sun, just reading the spec alone, I think, is not sufficient to disregard JBI. For example, I remember when I first read the spec and looked at ServiceMix, I was totally confused. I was asking myself, where are all the WSDLs? Most of the JBI opponents whose comments I have read have usually just read the spec or heard from co-workers, and have actually never used or seen JBI used. Now I am not saying this person is you, Ross; you have obviously done your homework. I believe the biggest failure of JBI has been communicating clearly what JBI is and how to use it. It took me many months until I felt like I actually "understood" JBI. Before that moment happened I thought I knew what JBI was, only to learn something new and realize I never actually did.

"Adaptive Integration"

Unless I misunderstood it, your definition of Adaptive Integration fits well with my experience with JBI. Using JBI I can build "best-of-breed" integration solutions using any number of the JBI components available with ServiceMix (SM) or OpenESB (OESB). If I need to integrate with JDBC I can do that; or RSS, XMPP, SIP, Corba, and CICS. Perhaps my lack of knowledge of Mule capabilities is making me naive here, but to me I can use JBI to bridge multiple systems together.

"JBI attempts to standardize the container that hosts services, the binding between services, the interactions between services, and how services and bindings are loaded. The problem occurs when the APIs around all these things makes assumptions about how data is passed around, how a service will expose its interface contract and how people will write services."

In a nutshell, the only thing I think JBI standardizes is the interaction between components, or put another way, the format of a message that goes over the NMR. Beyond that it's up to the component developers to do what they wish. For example, with the RSS BC, there are no assumptions that need to be made about incoming requests from the NMR; it's a standard JBI message. It doesn't matter if it came from SIP, HTTP, or Corba. To me that is the beauty of JBI. Once I create my JBI layer in my component, anybody in the container can call me.

"Xml messages will be used for moving data around"

As stated above, JBI doesn't say that if your legacy system doesn't know XML, then you can't use JBI. The exact opposite is true. To me this is the perfect fit for JBI and Binding Components. My definition of a Binding Component is a translator: its sole job is to translate protocol X to JBI and JBI to protocol X. Perhaps what you are suggesting is that some message types don't "convert" easily to XML, since that is essentially what messages over the NMR are. So in your example, a Cobol CopyBook message can't, or shouldn't, be converted to XML. I am not sure I would agree, since I would think, given a schema, anything could be represented in predictable XML.

"Data Transformation is always XML-based"

Since everything is converted to XML to be passed over the NMR, the only transformation needed is XML. So what you say is true, but to a JBI user I guess it doesn't matter. On the other side, if the NMR allowed for other message types, then yes, I guess you would need more transformers; but with JBI, the transformers are the Binding Components.

"Service contracts will be WSDL"

This is a very true statement, and I am not exactly a WSDL/XML lover, unless I don't have to deal with it. I think the original intent was that there would be a lot of tooling support for JBI to alleviate that burden from the JBI user. To me Sun has done a better job of this than SM, but this is definitely a drawback to JBI. However, SM has all but eliminated WSDLs in JBI, so users of SM don't need tooling support since they don't actually require WSDLs.

"No need for message streaming"

Another very true statement: there is no "easy way to stream messages in JBI". With XML being so verbose, I believe SM or OESB would probably dislike a 3 MB XML file getting pumped through it, especially if it gets copied over and over again from one component to the next. This is where JBI might have difficulties scaling. As a colleague pointed out, the content of a JBI message is XML, and that XML has to be marshalled and unmarshalled. That process is very heavy, and even more so if there are reliability/durability constraints on the Message.

However, I think it's a current tradeoff in order for all components on the NMR to communicate in a predictable fashion, since that is the main concept of JBI: the ability for all the JBI components to communicate. They have to communicate somehow. One component can't decide that it expects format A while another requires format B; then nothing would work. So it sounds to me like XML was the language chosen to guarantee interoperability between components.

On a side note, JBI components have the ability to set the content of a JBI message that goes on the NMR as anything implementing the interface javax.xml.transform.Source, meaning they can use streaming XML with the SAX and Streaming implementations. However, once consumed, that stream is gone and is probably represented as DOM (ouch) somewhere in the process. Our components use the Streaming impl when creating JBI Messages, but that doesn't mean the others do.
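In component code that amounts to a one-liner (sketch; inputStream stands in for whatever stream your component has in hand):

// hand the NMR a stream instead of a pre-built DOM
normalizedMessage.setContent(new javax.xml.transform.stream.StreamSource(inputStream))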

Either way, I have to think that smart people like Peter Walker, yourself, and other spec leaders can come up with something to resolve this and get the best of both worlds.

"You need to implement a pretty heavy API to implement a service"

I disagree that JBI users have to know more about JBI than is necessary, but then again it's hard for me to play that person since I am also a component developer. Most of the people I have taught how to use JBI have been successful in using it without having to know as much as me. I know in SM you can expose a POJO pretty easily. In OESB you can use the Scripting Engine or create an EJB Service to be used in JBI. From what it sounds like, though, it's much easier with Mule.

Again, I have not had a hard time teaching non-JBI users about JBI. However, I skip the spec and jump straight to a demonstration of how you can use JBI. This seems to help. I remember the first video I watched when the light bulb finally went off: it was a video on how to use the Sun SMTP BC and create a Service Assembly with Netbeans. After I watched that video everything just came together for me and my co-workers. This is basically how I start off explaining JBI: it's best explained with an example rather than a bulleted powerpoint.

"It’s not actually that clear what a service engine is in JBI."

Touché. I hear this a lot, even from me. Again, as I stated earlier, the JBI community has done a poor job of training and communicating with its users and developers. My first definition of a Service Engine was basically a Service. I still don't have a good definition of an SE; usually I will just talk about what existing SEs are out there, like the BPEL and XSLT SEs. To me Binding Components are easy to define, so an SE is just everything else (granted, not exactly the best definition).

"The JBI specification is obfuscated with lots of new jargon"

Yes, yes, and yes. Again, a poor job of communicating. However, the JBI user should not be concerned with most of this jargon, only the JBI developer. For example, the JBI user shouldn't need to know what the NMR is or what it does, but it's important for JBI developers to know.

"This breaks the re-use story since, if I use a JBI Binding Component in one container doesn’t mean it will behave the same way in another container."

Ideally, JBI was supposed to be this loving community of predictable containers and components. Unfortunately that hasn't exactly happened. I will say this though: from my personal experience of using different components on different containers, for the most part components and Service Assemblies work in a predictable way on other containers. For example, I have successfully used many SM components in OESB, and I have heard of Sun components working in SM. However, the failure is you can't intermix Sun and SM components (or at least from what I know). For example, you can't use the SM HTTP BC with the Sun BPEL SE. To me this wasn't a failure of JBI, only of how the JBI container was implemented. In a perfect world this would have happened, but it hasn't.

Ironically, it actually doesn't take that much to get your components to work on both SM and OESB. Of the four components we have written (RSS, SIP, XMPP, and UDDI), XMPP has been modified to work on both. We haven't had a chance to ensure the other 3 can, but hopefully we will get to that, or the open source community could help. At the time of development we didn't actually understand JBI that well, and consequently, when we had our components originally working in SM and tried to port them over to OESB, they worked, but only with other SM components. In order for our components to work with Sun components we had to basically rewrite the JBI layer. Unfortunately, due to time constraints it was not our goal at the time to keep backwards compatibility with SM. It wasn't until after we finished all this that we actually understood JBI and what it meant to create a portable component.

"If you look at every JBI implementation each has written their own JMS, FILE, HTTP, FTP etc Binding components… not exactly what I’d call re-use."

You are exactly right. Each container has implemented its own components for File, JMS, and HTTP, most likely because of the issues I mentioned above. However, this doesn't mean JBI can't get there. If these projects wanted to make them portable, we wouldn't have this problem. Instead they focus their efforts on other things.

As a side note, since my group was using OESB, when I joined a new group we needed to get OESB working on JBoss. This was not supported at the time; my previous group was able to work with Sun to get it working, and it's currently being incorporated into OESB.

Summary
IMHO a lot of assumptions were made about JBI, but you also addressed some real issues with it. One reason I think JBI has not been adopted is the fact that it didn't have the backing early on of IBM and BEA. From what I recall, they dropped out pretty early once they realized it was going to be a competing product. Another reason for the lack of adoption is, again, the failure to communicate what JBI really is.

I hope I didn't reiterate responses you have heard in the past. I wasn't around when JBI was originally approved, so I don't know a lot of the history. These are just my opinions I have collected through developing components and using SM and OESB.

I hope it helps and I look forward to your response.

Wednesday, April 16, 2008

Sharing your folders in Ubuntu

Today I needed the ability to modify files on my Linux host machine from my Windows VM instance running in VMWare Server. This was actually pretty easy and once I found this article I was able to get it working in about 5 minutes.

Enjoy

Tuesday, April 15, 2008

Using Grails and Glassfishv3

In case you missed it, an important feature is being worked on to support direct deployment of a Grails application in Glassfish v3 (see Getting Started with Grails on Glassfish). According to Mr. Gupta, they are working towards being able to deploy a Grails app to GFv3 using the grails commands (currently Grails uses Jetty). They are also working on deploying a Grails app for production purposes without having to create a WAR, which is the current method. Both will definitely improve developer productivity when using Grails and GFv3.

And since it was recently announced that GFv3 can run on OSGi, I think GFv3 continues to look very interesting.

Thursday, April 10, 2008

Update: Always specify a version for Maven2 plugins

I previously wrote about specifying a release version for maven plugins in your POM files to have a more stable build and stop depending on SNAPSHOT plugin dependencies. It now seems that the latest version of maven2, version 2.0.9, comes with release versions for all the common plugins (see the release notes).

"MNG-3395 - Starting in 2.0.9, we have provided defaults in the super pom for the plugins bound by default to the lifecycle and a few other often used plugins. This will introduce a bit of stability to your builds because core plugins will not change magically on you when they are released. We still recommend taking control of your plugin versions via pluginManagement declarations as this is the most robust way to future proof your builds. Defaulting the plugins in the superpom was a step towards introducing stability for small builds and new users. A full table of the versions used is shown in the next section."

Note how they still recommend specifying release versions in the pluginManagement section of your POM.

Update: Using the JBI JavaEE Service Engine

Previously I wrote about how to use the JBI JavaEE Service Engine to increase performance when orchestrating EJB Services created in Glassfish with the BPEL Service Engine in OpenESB. That was almost 8 months ago and Netbeans and OpenESB have changed a lot and consequently I needed to update it.

So I would like to provide an update using the latest Netbeans (20080402).

Again, if you haven't already, go ahead and download and install OpenESB which includes Netbeans v6 and Glassfish v2.

  1. Create an EJB Module
    • File > New Project > Category Enterprise > Projects EJB Module
  2. Create a new Web Service
    • Right click on your EJB Module project
    • Choose New > Web Service
    • Add a new Operation
  3. Create a new BPEL Module
    • File > New Project > Category SOA > Projects BPEL Module
  4. Create a new BPEL Process
    • Right click on your BPEL Module project
    • Choose New > BPEL Process
  5. Create a new SOAP WSDL
    • Right click on your BPEL Module project
    • Choose New > WSDL Document
    • Go through the wizard and for the Binding Type make sure it's SOAP
  6. Configure your BPEL Process by dragging the SOAP WSDL onto the left side of the BPEL Process.
    • Add the standard "buttons" to the BPEL Process Sequence
      • Receive
      • Assign
      • Invoke
      • Assign
      • Reply
  7. Utilize the EJB Web Service in the BPEL Process (this is where things are different than in past versions)
    • Right click on the Web Service under the EJB Module and select Generate and Copy WSDL
    • Uncheck Do not copy if it is checked.
    • Expand your BPEL Module and copy the WSDL into the BPEL Module's src directory.
    • Now you should have a copy of the Web Service's WSDL in your BPEL Module.
    • Drag this new WSDL onto the right side of the BPEL Process so that we can tie the Invoke call to it.
  8. Create a Composite Application
    • File > New Project > Category SOA > Projects Composite Application
    • Right click on the new Composite Application and select Add JBI Module
    • Add both your BPEL Module and EJB Module
    • Clean and build the Composite Application (Right click > Clean and Build)
    • View your CASA file by double clicking on the Service Assembly file under your Composite Application.
  9. Your CASA file should look something like this
  10. Now you should be able to start Glassfish and deploy your Composite Application. Your BPEL Process will now utilize the Java EE Service Engine. FYI, do not deploy the EJB Module separately since it's included in the Composite Application.

Wednesday, April 9, 2008

Another Netbeans 6 tip

I previously wrote about not installing multiple versions of Netbeans 6 and now I would like to include another tip.

When installing a new version of Netbeans 6, make sure you remove the .netbeans directories while uninstalling your previous version. Otherwise the support for creating Tests in Composite Applications doesn't work.

In fact just delete the following before each new install and you will have better luck:

  • rm $USER_HOME/.asadminpass
  • rm $USER_HOME/.asadmintruststore
  • rm -rf $USER_HOME/.openesbinstaller_[machine_name]
  • rm -rf $NETBEANS_HOME/.netbeans
  • rm -rf $NETBEANS_HOME/.netbeans-derby
Note this is on Linux using the most recent version, 20080402. I am not sure where the directories are placed or what they are called on windows. Also, I believe older versions of Netbeans 6 placed the .netbeans* directories under $USER_HOME, so you might check there as well.

Tuesday, April 8, 2008

IntelliJ Idea able to open Maven2 POMs directly

I am not sure all my Idea co-workers are aware, but in case they aren't, it's worth repeating: in Idea 7 (I am specifically using 7.0.3) you can quickly open your project in Idea by pointing it to your project's maven2 POM. It works pretty slick and saves me from having to run mvn idea:idea. It also supports syncing with the maven structure on startup, so if you add new dependencies to your POM you shouldn't have to perform any magic to get Idea to recognize them.

Jeff Black reports the same ability with Netbeans 6. And here is the original Idea post describing this new feature.

Using WebEx to demo from your Linux machine

In my unending quest to figure out how to efficiently do a demonstration from my Linux machine, I was finally fortunate enough to come across a solution. Today I did a demonstration for a friend's company, and they use WebEx. Luckily for me, I reluctantly tried it first on my Linux machine, not expecting it to work, and it worked! I was able to do a demo from my Linux machine and it actually worked seamlessly.

So if your company consists of different operating systems (windows and linux) and you need software that works on both so employees can share and view demonstrations, take a look at WebEx. It is not free, so that is a bummer, but neither is Goto Meeting (which is what we currently use).

Friday, April 4, 2008

New simple Linux tip for terminal: CTRL+L

I just found this out (basically because I saw a coworker clear his terminal window with a keyboard shortcut on his Mac, as opposed to running the clear command).

If you want to clear your terminal screen hit: CTRL+L.

Ubuntu Remote Desktop

As an update to my premature post yesterday, I was finally able to use Ubuntu's Remote Desktop ability to share my desktop to a Windows machine in order to perform a demonstration. VNC Server did not work and neither did NoMachine (for me, that is).

Luckily someone mentioned Ubuntu's Remote Desktop, and I was able to VNC in from Windows using RealVNC's VNC Viewer, and presto. Now I just join a session in Yugma and share out my Windows desktop, which is displaying a VNC session into my Linux laptop. Whew!

I can't wait for the day when I can get a Mac, or when Yugma can share a Linux desktop.

Thursday, April 3, 2008

Using Yugma to demo from your Linux machine

Currently our company pays to use Goto Meeting to coordinate meetings. Unfortunately, Goto Meeting does not work in Linux and consequently I usually connect to meetings from my Windows VMWare instance.

Today I had the need to do a demonstration with offsite coworkers, and I needed to do it within Linux. Since I have used the free service Yugma in the past, and it supposedly works in Linux, I decided to spend some time trying to get it to work.

Well, see for yourself. It took some time, but with their free online chat Help Desk (which, unexpectedly, was amazing) we eventually got it to work.

I imagine others wouldn't have as many issues as I did, but just in case, here are some of the steps I had to perform to get this to work.

First, if you haven't already created an account, go ahead and create one (http://www.yugma.com). Then try visiting the Start a Session page. If you haven't already installed the software, it should provide a link to download Yugma. If not, then you have issues like I did. Basically FireFox was using an old java version, and I had to update it to JRE 6. I followed their instructions here, but that wasn't enough.

With the help of the Yugma Help Desk I still had to do the following:

  • sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-bin
  • update-alternatives --list java
    this should print out
    /usr/bin/gij-4.1
    /usr/bin/gij-4.2
    /usr/bin/cacao
    /usr/lib/jvm/java-6-sun/jre/bin/java
  • sudo update-alternatives --config java
    [sudo] password for jlorenzen:

    There are 4 alternatives which provide `java'.

    Selection Alternative
    -----------------------------------------------
    1 /usr/bin/gij-4.1
    2 /usr/bin/gij-4.2
    3 /usr/bin/cacao
    *+ 4 /usr/lib/jvm/java-6-sun/jre/bin/java

    Press enter to keep the default[*], or type selection number: 4
    Using `/usr/lib/jvm/java-6-sun/jre/bin/java' to provide `java'.
  • Then restart FireFox. Revisit the Start a Session page and you should be golden.
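Before declaring victory, you can sanity check which java is now the default. These two commands are standard on Ubuntu, so consider this a quick sketch:

java -version
update-alternatives --display java

The first should report the Sun JRE 6, and the second shows the full list of alternatives along with the current selection.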
As a developer I really like Yugma because it has a free version that allows you to invite at least 10 people, which is perfect for developers. And since our Goto Meeting licenses are controlled, it's difficult to get a Goto Meeting scheduled on short notice (although Josh Hoover is the King of Goto Meetings). The other nice thing about Yugma is it's easy to start and share a session. I remember in Windows Outlook you could even install a package that put buttons in Outlook and set it all up; very handy.

Friday, March 21, 2008

Automated Performance Tests using JMeter and Maven

Learning how to write and automate performance tests isn't easy. Perhaps that is why I had never actually written and automated performance tests, until now. For my latest open source side project, sass4j, I wanted to start the project with some automated performance tests. Since we were already using maven2, I wanted to find a maven2 plugin that allowed me to write complex performance tests and automate them easily in a continuous integration environment such as Hudson. Since I had heard good things about JMeter and there was an existing JMeter plugin, I decided on those technologies. Unfortunately the path this took me down was rather long and annoying, but I eventually figured everything out, and that is why I would like to document the steps so others can reproduce them in less time.

But first, why would anyone want to spend unnecessary time setting this up? The big advantage, I think, is having the ability to compare nightly test results against a baseline and against expectations. If my latest changes caused a major decrease in performance when I only added a few lines of code, then perhaps something is wrong. The second advantage is that the sooner a performance issue is found, the cheaper it is to fix. Think about the change set a developer has to look at if nightly performance tests were run, compared to finding a performance issue 6 months after the bug was introduced. With the former scenario, I only have to look at what has changed in the last 24 hours.

Now onto how to actually do this. I had two references in getting this done: The Official Apache JMeter Maven Plugin Site and a blog on AMIS by Robbrecht van Amerongen. Both are incomplete, but combined they provide enough information.

Here is an outline of what you need to do. End to end this should take you about 15 minutes to set up; compared to the hours I spent, I think you are getting a deal. Also, think about doing this in your Artifactory server or company maven repository and not just locally (doing it locally only helps you and not your entire company).

1) Download the JMeter maven bundle I created containing all the necessary artifacts
2) Install JMeter Plugin dependencies
3) Install the JMeter Plugin
4) Install JMeter
5) Update your maven project
6) Create jmx files and run mvn verify

The first 3 steps are necessary because neither the JMeter plugin nor some of the specific dependencies it requires are available on any public maven repo I can find.

1) Download the JMeter plugin bundle

  • Click here to download the JMeter plugin zip file
  • Unzip it
2) Install JMeter Plugin dependencies
  • cd to the extracted folder jmeter
  • run the following mvn commands to deploy the jar files locally. Obviously update the location of your local maven2 repository, and keep in mind the file paths are specific to Linux. Again, do this once against your company's maven repository (see the sketch after this list).
  1. mvn deploy:deploy-file -DgroupId=org.apache.jmeter -DartifactId=jmeter -Dversion=2.2 -Dpackaging=jar -Dfile=jmeter-2.2.jar -DpomFile=jmeter-2.2.pom -Durl=file:///home/jlorenzen/.m2/repository/
  2. mvn deploy:deploy-file -DgroupId=jcharts -DartifactId=jcharts -Dversion=0.7.5 -Dpackaging=jar -Dfile=jcharts-0.7.5.jar -Durl=file:///home/jlorenzen/.m2/repository/
  3. mvn deploy:deploy-file -DgroupId=org.apache.jorphan -DartifactId=jorphan -Dversion=2.2 -Dpackaging=jar -Dfile=jorphan-2.2.jar -Durl=file:///home/jlorenzen/.m2/repository/
  4. mvn deploy:deploy-file -DgroupId=org.mozilla.javascript -DartifactId=javascript -Dversion=1.0 -Dpackaging=jar -Dfile=javascript-1.0.jar -Durl=file:///home/jlorenzen/.m2/repository/
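If you are deploying these to a shared company repository instead of your local one, only the url changes, plus you typically need a repositoryId matching a <server> entry in your settings.xml for credentials. A sketch with a made-up URL and id (adjust both for your environment):

mvn deploy:deploy-file -DgroupId=org.apache.jmeter -DartifactId=jmeter -Dversion=2.2 -Dpackaging=jar -Dfile=jmeter-2.2.jar -DpomFile=jmeter-2.2.pom -Durl=http://repo.mycompany.com/maven2 -DrepositoryId=company-repo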
3) Install the JMeter Plugin
  • Unzip the maven-jmeter-plugin.zip file that was included in the JMeter plugin bundle.
  • cd to the maven-jmeter-plugin folder
  • run: mvn install
  • This will install version 1.0 of the maven-jmeter-plugin. It's important we install a release versus a snapshot: you don't want your project to depend on snapshot plugins, since that can have nasty side effects for building and releasing.
4) Install JMeter
Now that you have the plugin installed you can actually start modifying your project's pom to use it. But first you need to install JMeter itself, because we will need its jmeter.properties file and some XSL files, and you are going to need it to create the .jmx files.
5) Update your maven project
  • Under your project create the directories src/test/jmeter and src/test/resources
  • Copy the jmeter.properties file from the JMeter bin folder to src/test/jmeter.
  • Update the property jmeter.save.saveservice.output_format in the jmeter.properties file from csv to xml.
  • Copy the files jmeter-results-detail-report_21.xsl and jmeter-results-report_21.xsl from the JMeter extras folder to src/test/resources (a shell sketch of these steps follows the POM snippet below).
  • Add the following to your POM's build/plugins section
<build>
<plugins>
<plugin>
<groupId>org.apache.jmeter</groupId>
<artifactId>maven-jmeter-plugin</artifactId>
<version>1.0</version>
<executions>
<execution>
<id>jmeter-tests</id>
<phase>verify</phase>
<goals>
<goal>jmeter</goal>
</goals>
<configuration>
<reportDir>${project.build.directory}/jmeter-reports</reportDir>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>xml-maven-plugin</artifactId>
<version>1.0-beta-2</version>
<executions>
<execution>
<phase>pre-site</phase>
<goals>
<goal>transform</goal>
</goals>
</execution>
</executions>
<configuration>
<transformationSets>
<transformationSet>
<dir>${project.build.directory}/jmeter-reports</dir>
<stylesheet>src/test/resources/jmeter-results-detail-report_21.xsl</stylesheet>
<outputDir>${project.build.directory}/site/jmeter-results</outputDir>
<fileMappers>
<fileMapper implementation="org.codehaus.plexus.components.io.filemappers.FileExtensionMapper">
<targetExtension>html</targetExtension>
</fileMapper>
</fileMappers>
</transformationSet>
</transformationSets>
</configuration>
</plugin>
</plugins>
</build>
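As promised, here is a shell sketch of the setup steps above. JMETER_HOME is an assumption for wherever you unpacked JMeter, so adjust it for your machine:

JMETER_HOME=/opt/jakarta-jmeter-2.2   # assumption: your JMeter install location
mkdir -p src/test/jmeter src/test/resources
cp $JMETER_HOME/bin/jmeter.properties src/test/jmeter/
cp $JMETER_HOME/extras/jmeter-results-detail-report_21.xsl src/test/resources/
cp $JMETER_HOME/extras/jmeter-results-report_21.xsl src/test/resources/
# flip the results output format from csv to xml
# (if the property is commented out in your copy, just edit the file by hand)
sed -i 's/^jmeter.save.saveservice.output_format=.*/jmeter.save.saveservice.output_format=xml/' src/test/jmeter/jmeter.properties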
6) Create jmx files and run mvn verify
  • Now use JMeter to create your .jmx files and place them under the src/test/jmeter directory.
  • run: mvn verify to execute your performance tests
  • run: mvn verify pre-site to execute your performance tests and produce the test results in an HTML report.

If you want to see a real example of this click here. The only step I am leaving out is actually automating it, which luckily isn't the hard part. All you need to do in Hudson is create a new job that executes mvn verify to get the tests running in a CI environment; a sketch follows.
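
For what it's worth, the Hudson job can be as simple as a freestyle job with a nightly trigger and a shell build step; something like the following (the schedule is just an example):

# Build periodically (cron syntax), e.g. nightly at 2am: 0 2 * * *
# Execute shell build step:
mvn clean verify pre-site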

Monday, March 10, 2008

Always specify a version for Maven2 plugins

For the past several months my team has suffered a lot of downtime because our local builds and our Continuous Integration environment fail for what appears to be no reason. What makes these issues difficult to debug is that what fails on your machine won't fail on mine ("works on my machine"). We eventually found the cause: we were not specifying a version for plugins in our maven2 poms. Pretty dumb, eh?

Wrong way

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
</plugin>
Correct way
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.3</version>
</plugin>

This matters because if you leave out the version, maven2 resolves the latest version of the plugin it can find, snapshots included. So what was happening was that my project unknowingly became an automatic beta tester for third-party plugins.

So the moral of the story is to always include a version (preferably a stable release version like 2.3 and not 2.3-SNAPSHOT) when defining your plugins. This will go a long way in making your build more stable. Better yet, define those plugins in your parent pom using the <pluginManagement> section. This keeps your versions in one place, so when you do intentionally upgrade to a newer version you only have to modify it in one place for your entire project. For example:
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.3</version>
</plugin>
</plugins>
</pluginManagement>


Specifically, the maven-surefire-plugin and maven-war-plugin have cost us a lot already, so recently I updated all our poms to define a version for every plugin we use. And one reason it would fail for some but not others is that some prefer to always build in offline mode (-o), which prevented them from seeing the problems.
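
A quick way to audit which plugin versions your build is actually resolving is the help plugin; for example (a sketch, using goals I believe were available at the time):

mvn help:effective-pom
mvn help:describe -Dplugin=org.apache.maven.plugins:maven-surefire-plugin

The first dumps the fully-resolved pom, including plugin versions, and the second describes a single plugin along with the version maven resolved for it.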

Avoid installing 2 versions of Netbeans 6

Tonight I installed the latest OpenESB software (Build 20080303) and was rather confused since a lot had changed since I last used it back in October 2007. I found myself being rather unproductive, so I decided to revert to an earlier version. So I downloaded Build 20080214 and installed it, but I did not uninstall the previous version. With the older version I repeatedly received exceptions on just about everything I clicked on. Many of my co-workers battle these issues every day, but now I might have figured out why. For some reason, Netbeans just doesn't like being installed twice on the same system under the same folder (for me that was /workspace/java/openesb). Under there I had 2 netbeans and 2 glassfish folders. Once I uninstalled everything and reinstalled the older version (Build 20080214), it worked perfectly.

So when you are receiving all those nasty exceptions every 5 seconds in Netbeans, make sure you only have one installed.

Thursday, March 6, 2008

Challenges using Icefaces

I would just like to take a quick moment and inform anyone considering Icefaces about the challenges I have faced using it. Take it from my current experience (versions 1.6.1 and 1.6.2) and the words of an Icefaces developer: Icefaces "is a stateful technology... providing enhanced features for the user... but require server-side resources to do so." That last statement has been our recent struggle.

Unfortunately, on my current project we don't have the luxury of lots of server-side resources. It's disappointing to say, but we have to share a single server that is already running IIS, and then run JBoss and MS SQL on the same machine. Obviously this is not by choice, and it doesn't look to change in the near future.

It might be that Icefaces never intended their framework to run in such a limited environment, but that really doesn't help me now. It would be great if they published minimum hardware requirements, especially if they are aware of performance issues. Equally important, I am sure we have strayed from best practices in using Icefaces.

Please don't ask me why we chose to use Icefaces in the first place (I wasn't with the group at the time). But I assume it was a lack of knowledge of how Icefaces works, plus we weren't aware of the environment limitations. Either way, we are facing difficult challenges that are very frustrating.

For example, we recently had severe problems with an editable table that was sortable (much like an excel spreadsheet). When users clicked twice on the header to sort, the data appeared to be sent twice and somehow our data got corrupted. Consequently we had to disable the ability to sort.

Also, based on my understanding of Icefaces, it appears that all clients maintain a constant connection with the server. I am assuming this is what the Icefaces developer above was referring to when he said Icefaces is a stateful technology. For some reason, I feel this constant connection would cause challenges for 50-100+ concurrent clients with limited server-side resources.

Hope it helps someone.