Friday, October 1, 2010

How to Change Extjs PieChart Colors

When using Sencha's (extjs) charting capability, you're most likely going to want to change the default color scheme. I was faced with this issue today and it's not documented as well as you'd expect. I had to piece together a few articles and I'm still not 100% sure it's the right way, since it uses an undocumented config option. But I wanted to document how I got it to work to save others some time.

Sencha is very well documented and their examples are great. Here is the Pie Chart example we are going to be updating (using version 3.2.1). For those that don't know, extjs charts are based on Yahoo's YUI charts, which also require Flash. Not only is it pretty simple to create charts with extjs, but we already use extjs, so we decided to prototype some charts with it.

Here is the code that produces a simple basic PieChart:

new Ext.Panel({
    width: 400,
    height: 400,
    title: 'Pie Chart with Legend - Favorite Season',
    renderTo: 'container',
    items: {
        store: store,
        xtype: 'piechart',
        dataField: 'total',
        categoryField: 'season',
        extraStyle:
        {
            legend:
            {
                display: 'bottom',
                padding: 5,
                font:
                {
                    family: 'Tahoma',
                    size: 13
                }
            }
        }
    }
}); 

This produces the following PieChart.

To change the default colors provide a series config option:
new Ext.Panel({
    width: 400,
    height: 400,
    title: 'Pie Chart with Legend - Favorite Season',
    renderTo: 'container',
    items: {
        store: store,
        xtype: 'piechart',
        dataField: 'total',
        categoryField: 'season',
        series: [{
            style: {
                colors: ["#ff2400", "#94660e", "#00b8bf", "#edff9f"]
            }
        }],
        extraStyle:
        {
            legend:
            {
                display: 'bottom',
                padding: 5,
                font:
                {
                    family: 'Tahoma',
                    size: 13
                }
            }
        }
    }
});

The chart it produces may not look the most appealing but at least we figured out how to change the colors.
Pie Chart with New Colors


Again, I'm not certain this is the best way to accomplish this as the series config option isn't documented in 3.2.1. But by piecing together this PieChart question and this article, I was able to figure it out.

Thursday, August 19, 2010

Missing Office Equipment Prank

Here is Part 2 of my series where I reminisce about the good times I had at Williams. As I stated previously, I came across these printed off emails in our attic while moving to our new home. I'm so glad I printed them off. #goodtimes

Anyway, this was a prank Keith Stanek, Josh Guthman (Guthy), and myself pulled on Michael Brotherman (Are you Miking Me?). Before showing you the email though, I need to set the stage. The year was 2003, and our company had just gone through 3 rounds of layoffs. Morale was pretty low. We worked on the 32nd floor of the 52 story BOK building in downtown Tulsa, Oklahoma. During fire drills we had to take the stairs down a floor. During one of the fire drills we noticed the entire 31st floor was void of humans, but a lot of very nice unused office furniture and white boards were still present. We joked around about repurposing some of the nicer equipment, but nothing ever came of it. Until one morning, we noticed Mike had a new chair. A very nice new chair. We gave him junk about it all day, but decided a prank would be better. So we had Josh Guthman (Guthy) write us up a believable email that Keith and I would spoof using the Facilities email address. Here it is:

From: Facilities Services-Tulsa
Sent: Thursday, March 13, 2003 7:39AM
Subject: Missing Office Equipment

During a recent audit for unused office equipment we discovered black reclining office chairs were missing from several of the floors in the BOK tower. Upon further investigation we discovered those chairs had been procured by current employees looking to upgrade from their existing chairs. While facilities appreciates your desire to utilize existing assets instead of requisitioning new ones, please remember that Williams is undergoing a cost savings campaign at this time and that all unused equipment needs to remain on its respective floor for proper accounting and redistribution either to the lessee or to an employee in need. If we have not already reacquired your chair, please return it to the floor from which it came by the end of business Friday, March 14.

Mike smelled it out, but no one ever confessed, and I'm pretty sure deep down, even though his gut was telling him it was a prank, he still didn't want to take the chance that it might be true. The best part about it was that the morning we sent the email, Mike and I had a group breakfast meeting with one of the Senior Executives. As we were going down the elevator, he mentioned to me that he was going to bring it up during the meeting, and with a completely straight face I called his bluff and told him that I thought he should, knowing that he wouldn't anyway; and if he did it would only make the prank that much better.

Mike returned the chair.

Wednesday, August 18, 2010

Real man of Genius

I usually keep this blog professional, but I'm in the process of moving and came across some emails that I just have to share, and they are too long to post on facebook. All 3 emails were from my time at The Williams Company in Tulsa, OK, where I had the privilege to work with some great friends. People like Keith Stanek, Michael Brotherman (Are you Miking me?), Jason Randall, Josh Guthman (Guthy), Erin Nylund, Becca Fairchild, Jennifer Brandt, and a host of others. I was so fortunate to work with these people and I'm not sure it could ever be duplicated. We had a ton of fun together. During off hours we played a lot of cards (Rook mainly), Starcraft I, and lots and lots of Halflife. Over time, I got to be pretty good at Halflife (HL). At some point we started playing two-on-one (Keith and Mike against me) and I still would usually win.

Now to the email. Wednesday August 20, 2003 at 9:45 AM, Michael Brotherman emailed Keith and me his Real Man of Genius in my honor. It was done just like the Real Men Of Genius commercials. Here it is in its original form:

Mr real man of genius...
We solute you, Mr. Total Conqueror in HL.
You who drops bombs that are not in the bathroom.
Who shoots your double barrel in our face.

Your eyes keep us from getting caught,
But your play keeps us from coming back.

Here's to you...Mr crossbow no miss,
Mr. Ray gun who makes it no fun.

We solute you.


My favorite weapon was the crossbow. Oh good times. Stay tuned. I've got 2 more coming.

Saturday, August 14, 2010

Hash tags in Commit Comments

I've been using Yammer, Twitter, and Facebook consistently for a while now. One of the things I really like is hash tags, where yams or tweets include additional meta information in the comment such as #groovy, #hudson, or #maven. One of the main purposes of hash tags is that they allow others to subscribe to an area of interest versus subscribing to hundreds of individual people. Another purpose they serve is determining interest value, sort of like a subject heading. Since hash tags are typically at the end of a tweet or yam, I usually read the end first before I commit to reading the whole yam or tweet. I don't follow a ton of people yet, but I do consume a lot of information in a day, and in order to find the good I have to wade through the bad. Using hash tags aids in this process.

I also think hash tags could help in another area: commit comments. It's something important I've mentioned before, and I think hash tags can be useful in commit comments even if there aren't any tools yet to mash them up. A few days ago, out of habit I accidentally started including some high level hash tags in an svn commit comment, and it occurred to me that they might be useful to others, if not myself in 6 months. If we find hash tags useful in yams and tweets, why not commit comments?

Including tokens in commit comments isn't new. In fact, we already include a Jira number in most of our commit comments, and this allows us to view all the commits for a Jira issue. There is even a Jira plugin that allows you to perform actions by specifying hashes in commit comments. For example, if I want to resolve a Jira I can include #resolve in my commit comment, and Jira will automatically Resolve that Jira. And don't feel like you can only include the #resolve tag if you're using that Jira plugin. I could see value in seeing a #resolve tag in the final commit of a Jira.

As an example, here is the exact commit comment I used that includes some hash tags for geoserver and installer.

"Jira: AC-4207. Got the filtered geowebcache.xml file correctly moved to the production and staging data directories. These files point to localhost with the correct geo port and stage geo port. Also commented out some fixpath.cmd lines to get the installer to work. Finally, I also change the ProcessPanel to not have a condition: changed to . This should allow us to be really selective in what we install and still allow the process panel to run, whereas before it wasn't running. #geoserver #installer"

Now the really cool part is if someone else in the near future notices an issue with geoserver in our installer, this comment will stick out more than a comment without those hashes.

Another cool thing that could be done is a team subscribing to certain hash tags in the svn commit emails. For example, someone responsible for peer reviewing all DAO changes could subscribe to a hash like #dao. Then when developers are modifying DAOs, all they need to do is include the #dao tag, as in the sketch below.
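As a rough illustration of that subscription idea, here is a small, hypothetical Groovy sketch that scans recent commit comments for a tag. The repository URL, tag, and log limit are made up for the example; a real setup would more likely hook into the commit emails or a post-commit hook.

// Scan the last 50 commit comments and print the ones mentioning the tag.
def tag = '#dao'
def log = 'svn log -l 50 https://svn.example.com/repo/trunk'.execute().text

// svn log separates entries with a long dashed line.
log.split(/-{10,}/).each { entry ->
    if (entry.contains(tag)) {
        println "Commit mentioning ${tag}:\n${entry.trim()}\n"
    }
}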

I guess what I am saying is perhaps we could also benefit from putting extra hash tags in our commit comments. My brain has already been trained to read them so personally I think it's useful.

Sunday, July 25, 2010

Restoring a Clonezilla Image using VirtualBox

Ubuntu 10.04 has been out for a few months, and I'm still on 09.10. I have had some success in the past upgrading, but I still prefer doing fresh installs. I guess it comes from my windows days, when an occasional fresh install was good for the computer soul. However, this time I'm also starting a new project at work doing .net instead of java, and I really wanted the ability to "come back" to my old setup. Basically, I wanted to convert my host machine to a virtual one or what's called P2V (Physical to Virtual). I tried VMware Converter but didn't get very far. With some advice from several co-workers though, I did come up with a method that did work and it was fairly easy.

The basic steps are:

  • Use Clonezilla to save a disk image to an external USB drive. This essentially clones my host machine so I can restore it later. My hard drive is around 120GB, so I put the image on my 500GB external USB drive. This took about 1.5 hours.
  • Create a new virtual machine on another external USB drive. The nice thing about using Clonezilla, is for this step you can use VMware or VirtualBox. I used VirtualBox. Obviously, you can't create the virtual machine on your laptop because you don't have enough space. And you can't use the same USB drive because when restoring, Clonezilla needs it to be unmounted. So instead I used another external USB drive.
  • Start the new virtual machine and boot up Clonezilla to begin restoring the image. You need to change the mode because the default view doesn't work very well when restoring. So at the Clonezilla menu, choose "Other modes of Clonezilla live". Then choose "Clonezilla live (Safe graphic settings, vga=normal)".
  • When you get to the point where Clonezilla needs to point to the external USB drive that contains the Clonezilla image, remember to enable the USB drive in VirtualBox. To do this, go to the Devices menu option in your virtual machine and select USB Devices and check the appropriate USB drive. Restoring my 120GB image took about 24 hours so make sure to do it when you have time.
  • Once Clonezilla has finished restoring the image, you're ready to power off the virtual machine, remove the Clonezilla CD, and restart.
I still had a few adjustments to make in order to get it to work. When I first started my virtual machine, it complained about not having PAE (Physical Address Extension) enabled. I had enabled PAE in ubuntu about a month ago so I could use all 4GB of RAM. Fixing it was easy. Under your machine's settings go to System and click on the Processor tab. Check the "Enable PAE/NX" checkbox and restart.

Once it booted up, it complained about my graphics configuration. I tried selecting "Reconfigure Graphics", but that didn't work. Instead I was able to get past it by selecting "Run in low graphics for one session". This allowed me to finish booting, after which I installed the VirtualBox guest additions, which seemed to solve the graphics issue.

That is all there is to it. It was all rather easy. Now I can install Ubuntu 10.04 and have the ability to go back to my previous development environment. I could also see lots of different use cases for this. Combined with the ability to clone virtual machines, all your virtual needs are met.

Upgrading to Maven 3

I've been playing around with maven 3 lately on our legacy maven 2 multi-module project via mvnsh. Like advertised, maven 3 is backwards compatible with maven 2. In fact, most everything worked out of the box when switching to maven 3. In this post, I'm going to highlight the required and currently optional items I changed so you can start preparing to migrate your project to maven 3. But first, what's so special about maven 3 and why would you upgrade? Polyglot maven, mvnsh, and improved performance (50%-400% faster) are just a few. And since it's so easy to migrate to maven 3, you really don't have any excuses.

Currently, I build our project using maven 2.2.1. This article was tested with mvnsh 0.10 which includes maven 3.0-alpha-6. The current release of maven 3 is 3.0-beta-1, while maven 3.1 is due out Q1 of 2011.

Profiles.xml no longer supported
I haven't really figured out the reasoning, but it doesn't really matter; maven 3.0 no longer supports a profiles.xml. Instead you place your profiles in your ~/.m2/settings.xml. Some of our database processes and integration tests require properties from our profiles.xml. It was simple to solve by just moving my profiles to my settings.xml and everything worked.

Upgrade GMaven Plugin
We depend pretty heavily on the gmaven plugin for testing, simple groovy scripts, and some ant calls. In order to build some modules I had to upgrade gmaven. The current version we were using was 1.0-rc-3. Our projects built perfectly after changing it to org.codehaus.gmaven:gmaven-plugin:1.2.

${pom.version} changing to ${project.version}
Here maven 3 kindly warned me that uses of the maven property pom.version may no longer be supported in future versions and should be changed to project.version. My modules still built, but I thought it was nice of maven to inform me of the potential change.

Version and Scope Issues
We had a few places where we needed to define a dependency version and another place where we shouldn't have defined a scope. Both instances prevented maven 3.0 from building our modules, but fixing them was easy. In the first instance, we had defined a version for a plugin in the pluginManagement section, but maven 3 also required it where the plugin was used in the reporting section. Not exactly sure about this one; ideally you would only define your plugin versions in the pluginManagement section, but oh well.

We had some WAR projects using jetty. In the jetty plugin definition we had a dependency on geronimo and had defined a scope of provided. Maven 3 complained about it and since it's really not necessary, just removing it fixed the issue.

modelVersion
Maven 3.0 kept warning about using ${modelVersion} instead of ${project.modelVersion}. I was still able to build though, so my guess is the value for modelVersion, 4.0.0, most likely will change when maven 3.1 comes out.

Weird Surefire Output
This wasn't necessarily an issue with the surefire plugin, but I wanted to comment about its output when tests fail, as I thought it might have been a maven 3 issue. Below is a screenshot of the output when you have failed tests. At first I thought it was a maven 3 issue, but I built the same project using the same commands with maven 2.2.1 and got the same test failures. Hopefully, they can clean this type of thing up, because I could imagine lots of people getting confused.

Failed test output

That's essentially it. Happily, there really wasn't much required to change, which goes to show the great lengths the maven team has gone through to ensure backwards compatibility. Finally, here are the Compatibility Notes maven has provided on the subject of migrating maven 2 projects to maven 3.

Monday, July 19, 2010

My First Groovy DSL

It's no secret I'm a groovy homer. I love it. One of the things that makes using groovy so fun is its syntax. Being able to get the contents of a file by just saying new File("/home/james/test.log").text is refreshing compared to its java counterpart. Another thing that makes groovy enjoyable is its ability to support Domain Specific Languages (DSL). MarkupBuilder is a great example. With Groovy, you can create simple or very complex DSLs for your purposes. To my knowledge there are a few ways you can create your own DSL: extending BuilderSupport or using methodMissing/propertyMissing. In my opinion, extending BuilderSupport is more involved, while methodMissing/propertyMissing is kind of the poor man's way of creating a DSL.

Up to this point though, I had never actually come across a good use case for creating a DSL until this past week. We have a large set of automated tests that run against our REST Services. Since our application is now multi-tenant, all of our tests need a valid organization (tenant). In our case, an organization contains multiple roles and locations. Each test has different requirements on the types of organizations it needs. Some might need 2 unique organizations, while another might need an organization with at least 2 roles and 2 locations. It was this use case that I thought a groovy DSL would fit perfectly.

My end goal was to have something like this:

def orgs = OrganizationService.getOrganizations().withRoles().withLocations()

This would return a list of organizations that had at least 1 role and 1 location. The nice thing about this DSL is it's scalable. Meaning, if we add new lists of information to an organization, we won't have to update our class. Also, an important feature is that the method names Roles and Locations correlate to the JSON named arrays of the organization. So my JSON looks something like this:

{"organizations": {"name": "James", "roles": ["R1", "R2"], "locations": ["Tulsa", "Omaha"]}}

When writing my DSL I decided to go the poor man's way and use the methodMissing approach combined with the @Delegate annotation. Here it is:

import net.sf.json.JSONArray

class OrganizationFilterArray {
    @Delegate private JSONArray array
    
    OrganizationFilterArray(array) {
        this.array = array
    }
      
    def methodMissing(String name, args) {        
        if (name.startsWith("with")) {
            def length = (args.length == 0) ? 1 : args[0]
            def arrayName = name[4..5].toLowerCase() + name[6..-1]
            
            return filterByLength(arrayName, length)
        } else {
            throw new MissingMethodException(name, this.class, args)
        }
    }
    
    private filterByLength(listName, length) {        
        def filteredArray = array.findAll {
            it."$listName"?.size() >= length
        }
        
        return new OrganizationFilterArray(filteredArray)
    }
}

I could have just as easily extended JSONArray since it's not final, but I was following the @Delegate guide initially and just thought it was an interesting alternative. The big key here is how I used methodMissing to support an infinite number of possibilities for how to filter an organization. Everything else I think is pretty self-explanatory. When it comes across a method that is missing, like withRoles(), it calls my methodMissing method. From there I filter out all the organizations that don't fit the criteria. Eventually, this class could be refactored to support more than just the size of an array. Note, I did have to upgrade the gmaven plugin version to 1.0 to get it to work in our maven project.
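To make the behavior concrete, here is a small, hypothetical usage sketch. The sample JSON and the idea of wrapping the parsed results in an OrganizationFilterArray directly are assumptions for illustration only; the real tests go through OrganizationService.

import net.sf.json.JSONArray

// Hypothetical stand-in for what the service would return.
def json = '''[
    {"name": "Org A", "roles": ["R1", "R2"], "locations": ["Tulsa"]},
    {"name": "Org B", "roles": [], "locations": []}
]'''
def orgs = new OrganizationFilterArray(JSONArray.fromObject(json))

assert orgs.withRoles().size() == 1                  // at least 1 role
assert orgs.withRoles(2).withLocations().size() == 1 // at least 2 roles and 1 location
assert orgs.withLocations(2).size() == 0             // nobody has 2 locations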

I knew from the beginning I wasn't going to use BuilderSupport. It did take me some time to figure out how I was going to support filtered (getOrganizations().withRoles()) and non-filtered versions (getOrganizations()). That is when I decided to extend List or JSONArray, as both method calls had to return my custom List/JSONArray. Overall, I'm pretty happy with the outcome and how long it took me. It was pretty trivial and very fun thanks to groovy.

Wednesday, July 14, 2010

Tip Debugging External Java Dependencies

Ever spent time debugging 3rd party java libraries? Decompiling is usually the first step. Attempting to walk through the code can be tedious but it's usually the first line of defense. But what if you want to deploy a slightly modified version? In the past, I've checked out the project and built it with my modifications. Since most open source projects don't support "virgin builds", this has a success rate of about 10%. Fortunately, there is a better way. I'm just disappointed I didn't think of it.

In our project we deploy a wiki that is based on JSPWiki using maven overlays. In the version we are using, there isn't any support for being able to configure the wiki files directory outside of a properties file in the WAR. In order to point JSPWiki to a different directory, you would basically have to unzip the WAR, update the file, and then zip the WAR back together (#fail). So, someone on our team discovered we could basically override this behaviour by providing our own implementation of the same class.

To be more specific, the class in question is com.ecyrd.jspwiki.PropertyReader. It's included in the JSPWiki.jar file under /WEB-INF/lib. Its default behaviour is not suitable for our needs, so we get an original copy of PropertyReader.java and place it under our maven project's /src/main/java directory under the same package of com.ecyrd.jspwiki. Once the project builds, we now have our version of PropertyReader.class under /WEB-INF/classes, which is important because the ClassLoader will look under /WEB-INF/classes first before looking in /WEB-INF/lib. This means our class is used instead of the one provided by JSPWiki in /WEB-INF/lib/JSPWiki.jar.

Now I know what you're thinking: that's a horrible idea James. And for the most part I agree, but it's not my fault this ability doesn't already exist in JSPWiki. So if you want to keep your conscience clean, go ahead and continue unpacking and repacking that WAR. I'll be happy getting important things done. Obviously, practicing this is the exception and not the rule. And one should provide the patch as an improvement back to the 3rd party for all to enjoy. And before you start asking yourself why you can't just extend the real PropertyReader and override the necessary methods, which I agree would be more ideal, it's not possible because you'd basically be extending yourself since the modified class is the first class in the classpath.

This technique has actually helped me twice debug environment specific issues. It's saved me a huge amount of time not having to build an external library. In fact, if you check out the exact version, you could even perform remote debugging with breakpoints.

So next time you need to debug an external 3rd party library, consider using this technique before attempting to build it.

Tuesday, July 13, 2010

Avatar Maven

Today I gave a quick presentation to some coworkers about maven. It's a broad topic, so I kept it fairly limited. Most of my audience was very familiar with maven, so I tried not boring them with stuff they already knew. I tried making it a little engaging by comparing the Avatar, master of all 4 elements, to Maven, master of the build (it's a stretch I know). It's a quick presentation (15 slides) providing some helpful maven tips, what's coming in maven 3, and mvnsh. Hope you like it.

Friday, July 9, 2010

Sharing Resources in Maven

Today I needed to figure out the best way to share resources across multiple maven modules. We have previously done it 2 different ways, neither of which I thought was very good. The first way was using a relative path to reach across to the other module's resource directory (usually not a good practice in maven). It went something like this:


    
<resources>
    <resource>
        <directory>../module1/src/main/resources</directory>
    </resource>
</resources>
    


The second way was using the infamous maven assembly plugin. I typically avoid the assembly plugin like I avoid writing Assembly. Plus I prefer avoiding 100 extra lines of XML for something so trivial. Luckily, the Sonatype guys apparently knew this and have come up with a more efficient way of sharing resources using the maven-remote-resources-plugin. It has the advantages of requiring a lot less XML lifting and it's nicely integrated into the maven lifecycle. I did run into one small issue trying to get it to work. By default it only copies **/*.txt files from src/main/resources. For several minutes, I couldn't figure out why it wasn't working until I added an includes for **/*.xml. Then it worked perfectly. Here is the end result:

Creating a resource bundle
Add the following to your POM which is going to create the resource bundle.
      
<plugin>
    <artifactId>maven-remote-resources-plugin</artifactId>
    <version>1.1</version>
    <executions>
        <execution>
            <goals>
                <goal>bundle</goal>
            </goals>
            <configuration>
                <includes>
                    <include>**/*.xml</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>


You should now see the following message in your mvn output while running mvn clean install.

[remote-resources:bundle {execution: default}]

This produces a /target/classes/META-INF/maven/remote-resources.xml file which contains references to the resource files. For example,

    test.xml

Consuming Resource Bundle
Add the following to the POM which needs to consume the new resource bundle.
      
<plugin>
    <artifactId>maven-remote-resources-plugin</artifactId>
    <version>1.1</version>
    <executions>
        <execution>
            <goals>
                <goal>process</goal>
            </goals>
            <configuration>
                <resourceBundles>
                    <resourceBundle>com.lorenzen:lorenzen-core:${pom.version}</resourceBundle>
                </resourceBundles>
            </configuration>
        </execution>
    </executions>
</plugin>


You should now see the following message in your mvn output while running mvn clean install.

[remote-resources:process {execution: default}]

You should now be able to look into your second module's /target/classes directory and see test.xml.

Thursday, July 1, 2010

RSS, Lucene, and REST

Sorry for the horrible title. I struggled trying to come up with a worthy title, but after a few minutes I decided to not let perfection get in the way of good.

My team has recently worked on a new feature I am pretty excited about: adding support for RSS/Atom in our application. I know, you're thinking: so what? It's not really the what I am excited about but the how. What I'm really excited about is how the story was defined and implemented.

Approach
We had the simple requirement from a newer customer to provide an RSS feed for newly created items. This actually wasn't the first time for this requirement. We prototyped a similar capability a long time ago using OpenESB and the RSS BC, but for multiple reasons it just didn't work out.

So our first decision had to answer how we were going to implement it; again, but better. Before the sprint began, a few of us got together and hashed out a potential solution: how about we use the Search REST Service, which is backed by Lucene, to support Advanced searches and return RSS?

Why does this excite me so much? To understand that I need to explain our application at a high level. It's a completely javascript-based application using ExtJS (now sencha), backed by REST Services using Jersey. Consequently, we have a lot of REST Services. Right now those REST Services support returning XML or JSON using a custom Response Builder we have created internally.

I'm excited because this single user story could have a huge improvement on the entire system:

  1. If we modified the Search Service to return RSS, then all our REST Services could support RSS.
  2. The REST Service would now support Advanced searches. Previously, it only really supported basic keyword searches.
  3. Any search they perform could now be subscribed to via RSS.
Implementation
I'm not going to go into every detail on how it was done. I wasn't even actually the one who implemented it (see Matt White. He did a fantastic job.). We did have one major hurdle we had to overcome, and that was how to index items to enable advanced searches like Status=New.

Previously this wasn't possible given how we were indexing our items. We were basically indexing the item by building up a large String containing all the item information like the following:
import org.apache.lucene.document.Document
import org.apache.lucene.document.Field

def Document createDocument(item) {
    Document doc = new Document()

    doc.add(new Field("content",
        getContent(item),
        Field.Store.NO,
        Field.Index.ANALYZED))

    return doc
}

def String getContent(item) {
    def s = new StringBuilder()
    
    s.append(item.getTitle()).append(" ")
    s.append(item.getStatus()).append(" ")
    s.append(item.getPriority()).append(" ")
    s.append(item.getDescription()).append(" ")
    
    return s.toString()
}

The problem with this is performing a search for "New" would have returned any item with a status of New as well as any items that contained the word New. The solution was to just add another Field to the Document.
doc.add(new Field("Status",
    item.getStatus(),
    Field.Store.NO,
    Field.Index.NOT_ANALYZED));

Now the Search Service could support advanced searches like Status:"New". You should put the value in quotes in case the value contains spaces (i.e. Status:"In Progress"). And since Lucene is so powerful, it also means the following search would work: Status:"New" AND Priority:"High" AND "Hurricane". Now users have the freedom to subscribe to a near limitless number of RSS feeds based on Advanced Searches.
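For illustration, a query like that is roughly equivalent to the programmatic Lucene query below. This is only a sketch against the Lucene 2.9/3.0 API; the index directory is hypothetical, and the field names come from the snippets above (Status is indexed NOT_ANALYZED, so it is matched exactly, while content was analyzed and lowercased).

import org.apache.lucene.index.Term
import org.apache.lucene.search.BooleanClause
import org.apache.lucene.search.BooleanQuery
import org.apache.lucene.search.IndexSearcher
import org.apache.lucene.search.TermQuery
import org.apache.lucene.store.FSDirectory

// Hypothetical index location; the real service manages its own IndexSearcher.
def searcher = new IndexSearcher(FSDirectory.open(new File("/tmp/item-index")))

def query = new BooleanQuery()
query.add(new TermQuery(new Term("Status", "New")), BooleanClause.Occur.MUST)
query.add(new TermQuery(new Term("content", "hurricane")), BooleanClause.Occur.MUST)

def hits = searcher.search(query, 10)
println "Found ${hits.totalHits} matching items"
searcher.close()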

Start to Finish
I think there were several reasons why this story was a success in my eyes. Most important were the two really smart co-workers who worked on it: Matt White and Chuck Hinson. All three of us knew of this user story ahead of time and we were able to discuss it technically days before backlog selection. This allowed us to brainstorm some ideas. Once we narrowed it down, we spent some more time separately looking into the code to find out the level of difficulty and whether Advanced Searches like Status:New would be possible. Overall, together I'd say we spent 3-4 hours doing the preliminary work. Doing that preliminary work I think really enabled us to give a proper WAG for the story.

I really can't speak for how the development went (I was at Disney World for 10 days with the family), but I was really impressed with the tests Matt wrote. He wrote a number of unit tests making sure advanced searches worked and basic searches still worked. On top of that, he wrote an overall functional test using HttpBuilder executing the REST Service just as our javascript client would.

Finally, once the main work was finished, we uploaded a diff file to our internal instance of Review Board. From there I was able to perform a peer review where we found a minor bug in the changes.

Summary
I am sure it's not an original idea, but I thought it was a fun User Story that hopefully will provide a lot of value beyond what was originally estimated. Ideally, this might help others who are in similar situations.

Friday, May 14, 2010

How to create a release without the maven2 release plugin

One of the most referenced articles I have written is "How to create a release using the maven release plugin". But what if you can't get the maven release plugin to work with your project? Perhaps like our team, you've got a legacy maven2 multi-module project that's been nigh impossible to use with the release plugin. Our project has a mix of WAR modules combined with some Flex modules. I believe our last issue was some googlecode flex mojo wasn't working with the release plugin. Consequently, for the past year or so, we've been manually creating our releases. This actually hasn't been that much of a pain since we really only do it once a sprint at the end. Combined with my favorite perl script it doesn't really take that long. However, it does have the disadvantage of requiring some knowledge of what to do and how to do it. Ideally, it would be a job in Hudson that anyone on the team could run as many times as they like.

In an effort to try and automate as much as possible, I decided to try and automate releasing our legacy multi-module project using bash. This has several benefits: releases are created faster, they are done consistently each time, and it's a turn-key solution anyone on the team can run that doesn't require stale documentation on how to do it.

It took me several hours to essentially duplicate the maven release plugin process. Thanks to our new intern Scott Rogers and linux master Ron Alleva, I was eventually able to get it finished. It's my first "official" bash script so pardon the mess. If you've never attempted to automate your release project, first consider reading my article on How to effectively use SNAPSHOT.

Here is the script available as a gist on github: project-release.sh. Here is what it does:

  1. Copies the current working branch (i.e. trunk) into another branch. It uses the pom.xml value to get the current working branch.
  2. Updates all the pom.xml version sections of the current working branch
  3. Commits the pom.xml changes
  4. Checks out the release branch
  5. Updates all the pom.xml version sections of the release branch (basically stripping off -SNAPSHOT)
  6. Commits the pom.xml changes
To run this script all you have to do is run: project-release.sh 2 false. The first parameter (2) is the position of the version number to increment for the current working branch's next version. For example, if trunk was on 1.2.0-SNAPSHOT and the position passed in was 2, then trunk gets updated to 1.3.0-SNAPSHOT. If the position was 3 then trunk would be updated to 1.2.1-SNAPSHOT. The second parameter is used when testing. It's like the dryRun option in the maven release plugin. When set to true, nothing gets copied or committed. The version math is illustrated in the sketch below.
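Here is a small Groovy sketch of just that version math, to make the position parameter concrete. It is a hypothetical helper for illustration; the real script does this in bash with xpath and sed.

def bumpVersion(String version, int position) {
    def nums = (version - '-SNAPSHOT').tokenize('.')*.toInteger()
    nums[position - 1]++                              // bump the requested position
    (position..<nums.size()).each { nums[it] = 0 }    // reset everything after it
    return nums.join('.') + '-SNAPSHOT'
}

assert bumpVersion('1.2.0-SNAPSHOT', 2) == '1.3.0-SNAPSHOT'
assert bumpVersion('1.2.0-SNAPSHOT', 3) == '1.2.1-SNAPSHOT'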

A few notes about the script:
  • The base branch URL is hardcoded but could easily be passed in as another parameter or placed and read from some external file.
  • It uses the xpath command to extract the pom version, project name, and scm url. I'm on ubuntu 9.10 and according to synaptic I have libxml-xpath-perl version 1.13-6 installed.
  • It doesn't run any maven commands like mvn deploy. Other jobs in CI can accomplish that or you can easily add them into the script.
  • To run from Hudson:
    • Create a New Job
    • In the Build section Add a Execute Shell Step
    • Update the Command text with: $WORKSPACE/trunk/project-release.sh 2 false
Overall, I'm pretty happy with the outcome. And as we start to perform more releases among multiple projects I think it's going to really come in handy. I think ideally you should try and release your project using the maven release plugin, but if that isn't possible then don't give up. Just clone.

Wednesday, April 14, 2010

Thoughts on Fowler's Continuous Integration

It's always kind of nice to go back to the basics. I've always enjoyed re-reading basic programming practices and patterns. I tend to forget the things I don't use on a daily basis. That's why I enjoyed reading Martin Fowler's article on Continuous Integration. The article says the last significant update occurred May 2006, but it's withstood the test of time; much like The Cathedral and the Bazaar. But if you don't have the time to read this rather long article, here are a few of the favorites I pulled out as I read it over the course of a few days. Before that, let me explain a little bit of my experience.

At my first programming job we didn't really have a VCS (Version Control System) like CVS or SVN, nor did we have a CI (Continuous Integration) server; we really didn't know any better. We essentially did all of our work straight off a shared drive (I know). But that was before I came to Gestalt, now Accenture, 5 years ago. Since then I've been exposed to CVS-->SVN-->Git, Ant-->Maven 1-->Maven 2, CruiseControl-->Hudson, and finally TDD (Test Driven Development). Being exposed to all of this has been a huge improvement to my career. More importantly it's been a huge benefit to how I write software and the tools our teams use such as VCS and CI. I can't imagine developing software without them.

Here are some points out of Continuous Integration that I think apply to any project, java or not:

Work does not stop on your commit

"However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done."
So true. Just because you ran some tests locally or manually tested it and checked in your changes doesn't mean you're done. You've got to monitor CI to ensure it passes. This has been a topic of discussion on my team lately as we've come in in the morning with a few broken builds. Solution: check in often during the day, but don't check in and leave without verifying CI passed. Either stay late, sign in at home, come in early, or check in first thing the next day.

Simple checkout build rule
"The basic rule of thumb is that you should be able to walk up to the project with a virgin machine, do a checkout, and be able to fully build the system."
This is a very important point. Not only will this improve the productivity of new team members but also reduce the amount of time it takes to create new CI jobs. This rule is even more important for open source projects. I've had several issues in the past trying to patch open source projects and wasted several hours just trying to build their code. If you want people to contribute to your project, make it easy for them to build your software. For example, I've been wanting to write a simple Docky plugin for Hudson, but have run into several issues (New Plugin and Missing Package) trying to build the Do project. Have those questions really been Answered? NO! What have I done about it? I haven't retried it since. To restate Mr. Fowler, I should be able to easily checkout your code and at a minimum build it. As an added bonus it'd be nice to run unit tests as well.

Automate everything
"However like most tasks in this part of software development it can be automated - and as a result should be automated. Asking people to type in strange commands or clicking through dialog boxes is a waste of time and a breeding ground for mistakes."
If you're just getting started with CI this can often be difficult. But your long term goal should be to automate everything. This includes creating/destroying your database, deploying/undeploying your application, automating your tests, and copying configuration files around. I'd even go as far as to say automate the creation of the development environment: installing maven and java for example. Again this not only speeds up new team members' productivity but also those virgin CI servers.

Two great examples of this. Before we had an internal CI team, our team was manually setting up multiple CI servers with maven, java, jboss, and a database. These new servers couldn't be used until all of this stuff was manually configured. Then our internal CI team helped automate some of this stuff, and now we can very easily use hudson to point jobs at different servers within minutes. Something that wasn't really possible before without manual intervention. And all they really did was call a few simple ant copy commands from maven.

Another good example of this comes back from our old CruiseControl and Ant days. At one point in our project we were constantly breaking a major piece of functionality, and one of the main reasons was it was very difficult to test. It was a distributed test with multiple servers communicating with multiple clients via SIP. The build process called for building the latest code, stopping 2 instances of weblogic (1 local, 1 remote), starting weblogic, deploying the latest code, waiting for weblogic to finish starting (not easy mind you), and then running our automated test. This was a rather huge undertaking, but given a few weeks we had the core of it automated. It was amazing. I never thought it would have been possible, but it was, and anytime that test failed we knew immediately we broke something. We were able to accomplish the difficult parts by calling remote bash scripts via ssh from ant.

Imperfect Tests
"Imperfect tests, run frequently, are much better than perfect tests that are never written at all. "
Not exactly sure what he means by imperfect tests, but this is one place I currently disagree. It takes practice to write good tests. Once you refactor and maintain tests over a long period of time you start getting pretty good at writing tests that require less refactoring. One of the things that is killing the productivity of our team right now is what I call "chronically failing tests", or tests that randomly fail for no reason. You check the change log and nothing changed in the build, which means it shouldn't have failed. You rebuild the job and it passes. Here lately this can be attributed to date comparison asserts and issues with timing. For example, the test passes when the database is local, but fails when the database is remote. Or you get different results when the time on the database server is not sync'd. The end result is this produces false negatives that really hurt the validity of CI; developers just start ignoring all failures. Once you've identified one of these chronically failing tests, it's important the author of that test, or the person who last modified it, refactor the test to be flexible. If the author doesn't do it, they will continue producing these types of imperfect tests.

Good Build Characteristics
He had several comments I would wrap into good general build characteristics. Two of which are fast builds and accessible artifacts. As a general rule he suggests keeping build times to around 10 minutes. That's usually achievable for compile/unit test jobs, but database-related builds and above can usually take longer. My general guideline is to try to keep those longer running builds to around 30 minutes, but definitely no longer than an hour. Unfortunately right now, we have several of those 40-55 minute builds I'd like to trim down some. It'd be great to see a hudson plugin that could show me how long each part of my build took.

With a combination of our company maven repository and hudson, it's pretty easy to make our artifacts accessible. This is really huge, as sometimes I don't waste time building certain things that take forever to build; I'll just download them from hudson. I know a lot of times our DBA will just download the zip he wants to test, which keeps him from having to update his source and build, etc. Another related topic is we have several nightly jobs that deploy the latest code to jboss/websphere that can be used the next day by everyone to see/test/verify the latest code.

Rollback Deployment
"If you deploy into production one extra automated capability you should consider is automated rollback."
This was a pretty new concept for me and one we don't necessarily follow. I've heard of Continuous Deployment, but never really heard about a rollback feature. I know we've accidentally benefited from a build failing and not deploying the latest nightly code, thus allowing us to perform diff-debugging to track down a bug. We had 2 servers that built the night before; 1 passed and the other failed, so it contained the previous day's build. A bug was detected on the passing server and we were unable to reproduce it on the outdated server. This told us it had been introduced in the past 24 hours. This isn't exactly rolling back, but maybe the moral of the story is keeping a server around that is behind a day.

Summary
There is a lot of good general information in this article and I would encourage anyone to take the time to read it. I only highlighted the things that really stuck out at me; there were a lot more useful things I passed over mentioning.

Thursday, April 8, 2010

Possible solution for WebSphere issue NMSV0310E

It's been quite a ride porting our EAR from JBoss 4.2.1 to WebSphere 6.1. Installation issues on CentOS and Precompiling JSPs were just a few of the issues we encountered. We are still producing 2 different EARs, but both are based on identical WARs.

The latest issue had to do with receiving a WebSphere exception from an Init Servlet that started a separate Thread. This Thread contained code that indirectly performed a JNDI lookup to get a Datasource. Unlike JBoss, WebSphere apparently doesn't like Unmanaged Threads performing JNDI lookups.

Here is the WebSphere exception:

"javaURLContex E NMSV0310E: A JNDI operation on a "java:" name cannot be completed because the server runtime is not able to associate the operation's thread with any J2EE application component. This condition can occur when the JNDI client using the "java:" name is not executed on the thread of a server application request. Make sure that a J2EE application does not execute JNDI operations on "java:" names within static code blocks or in threads created by that J2EE application. Such code does not necessarily run on the thread of a server application request and therefore is not supported by JNDI operations on "java:" names"

This seems to be a rather common issue in WebSphere with multiple possible solutions. I guess ideally you should try and configure a container-managed Thread using CommonJ (see section Scheduling and thread pooling). Unfortunately, the configuration is different for JBoss.

Fortunately, I think I stumbled upon another solution. While reviewing the WAR Spring applicationContext.xml file that configured our Datasource I noticed a property called lookupOnStartup. It was set to false and when setting it to true, the exception went away.

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/MY-DS"/>
    <property name="lookupOnStartup" value="true"/>
    <property name="cache" value="true"/>
    <property name="proxyInterface" value="javax.sql.DataSource"/>
</bean>
When setting it back to false the exception appeared again; setting it back to true the exception was gone. Unfortunately, I can only speculate as to why this solved the issue. My guess is, when lookupOnStartup was false, the first attempt to get the Datasource was from a separate Thread which WebSphere didn't like. However, when setting lookupOnStartup to true, the first time the Datasource was retrieved was by a container-managed Thread and once my separate Thread needed the Datasource it was already looked up and cached. According to the javadocs for JndiObjectFactoryBean.setLookupOnStartup the default is true, so it can't be that bad, right? I can't think of a reason why someone would want this delayed.

If you ever run into this issue, you might consider setting lookupOnStartup to true and see if that fixes your issue.

Wednesday, March 17, 2010

How to Speed up Maven

How many times do you invoke maven in a day? How much time could you save if you shaved 15 seconds off each execution? That doesn't really sound like a lot, but when executed 100 times a day it adds up quickly; and 15 seconds is probably conservative. Now stop being selfish and think collectively as a team how often maven is executed. 15 seconds for 50-100 builds a day across a team of 10-15 programmers adds up even quicker. This was my experience experimenting with the maven-cli-plugin with a simple sub-module running mvn clean install with unit tests.

I had never actually heard of maven providing this ability. I had seen and used it for other technologies like grails and scala, but I never considered if maven did as well. That changed when James Strachan recently tweeted about attempting to use a new feature provided by Sonatype called mvnsh: "a CLI Interface that enables you to speed up your builds because project information and Maven plugins are loaded into a single, always-ready JVM instance". From the FAQ, "The Maven engine is only started up (bootstrapped) one time and then held in a "ready" state waiting for user input of Maven or other shell command". Since I spend most of my day basically running maven, I was very intrigued.

Unfortunately, it appears mvnsh only supports maven 3 projects and our projects are still using maven 2. Even though I've read maven 3 is backwards compatible, I'm still waiting for a more official stable release (maven 3 is currently at 3.0-alpha-7). On the bright side, it appears mvnsh was set to improve on the maven-cli-plugin that supports maven 2 projects.

So I just wanted to share my experience getting the maven-cli-plugin set up and get the word out to fellow maven users to improve their productivity. Overall it seems to work as expected and does improve my local build times. I'm on Ubuntu 9.10, maven 2.0.10, and JDK 1.5. As I mentioned in the beginning, using cli:execute-phase decreased my build times by approximately 15 seconds. The biggest disadvantage so far is I don't have access to my aliases, which I depend on heavily for just about everything I do with maven.

Getting started with the maven-cli-plugin
I basically followed the Common Setup instructions on the github wiki User Guide.

I added the pluginGroup to my settings.xml file:

<pluginGroups>
    <pluginGroup>org.twdata.maven</pluginGroup>
</pluginGroups>
I then added a new profile to my settings.xml file:
<profile>
    <id>cli</id>
    <pluginRepositories>
        <pluginRepository>
            <id>twdata-m2-repository</id>
            <name>twdata.org Maven 2 Repository</name>
            <url>http://twdata-m2-repository.googlecode.com/svn/</url>
        </pluginRepository>
    </pluginRepositories>
</profile>
And finally I enabled the new profile in settings.xml:
<activeProfiles>
    <activeProfile>cli</activeProfile>
</activeProfiles>
Using the maven-cli-plugin
The maven-cli-plugin basically supports 2 goals: execute and execute-phase. Use execute when you want to run goals and use execute-phase when you want to run phases. It's unfortunate the plugin forces you to pre-commit to one or the other; hopefully mvnsh has improved this. I typically run clean install which are phases so I ran: mvn cli:execute-phase. This puts you in an interactive shell where you can run clean install. Once finished you continue to stay in the shell allowing you to repeatedly execute mvn phases without having to bootstrap maven each time.

Summary
Since I spend a majority of my time executing maven and this simple experiment decreased my build times, I plan on using the cli plugin for everyday use when I don't want to use my aliases. Also, this plugin should work on windows as well, so don't think you are left out. Let me know how your mileage varies. Do you have any other tips that improve maven execution times?

On a side note, I'm sure the django/python crowd cringe at how much time java developers waste compiling and deploying. Execution times are something I believe they don't have to worry about.

Monday, March 1, 2010

Free Maven Repository Hosting for Open Source projects by Sonatype

I'm very excited to see Sonatype support maven repositories for Open Source projects that use maven. In all honesty, they didn't have to do this. Unfortunately, the java.net repo was harming the maven reputation. I've had direct experience using the java.net maven repo and can say it was an unpleasant experience. When we open sourced our 4 JBI (Java Business Integration) Components, their home existed on java.net and we used their maven repo. It was difficult to upload anything and it seemed to be constantly down.

Before this announcement, open source java projects using maven didn't really have an option as to where they could publish their artifacts. To my knowledge neither Google Code nor Sourceforge offered this capability. Apache and Codehaus did, and obviously you still have Maven Central (http://repo1.maven.org/maven2/), but I never went through the process of what it took to use them. Now it doesn't matter where your project is hosted. Hopefully the next thing to come is free Continuous Integration services in the cloud using Hudson. I think this is the next step for project hosting sites like Google Code and github.

Providing free maven repos I think has a lot of benefits, and not just for maven users. For example, this should benefit all dependency management tools that are built on top of maven repos. I'm not 100% sure tools like Ivy, Grape, Gradle, and Buildr use maven repos, but my guess is they do and this will benefit those users. Another benefit is being able to standardize on maven repositories, hopefully preventing users from searching for where they can find your artifacts. I've wasted a lot of time in the past trying to find valid repositories that held artifacts for a project I was wanting to use.

I'm also very impressed with the features Sonatype is providing. Not only will they support release artifacts but SNAPSHOTs as well, which could consume a lot of space. You'll also be able to easily sync with Central. Finally, they will support a staging repo in order to test things out before officially releasing.

So go sign up and thanks Sonatype. Also, read this post to learn how to release your project using the maven release plugin.

Wednesday, February 24, 2010

Commit Comments: A Conversation with your Future Self

I frequently find myself searching subversion commit logs trying to find hints as to how certain "features/bugs" were introduced and why. Usually this results in wasting several hours, and almost every time I get frustrated with the lack of dialogue in commit comments. I can't stress enough the importance of this often overlooked feature when committing changes. A lot of developers look at it like a 30 second burden that's preventing them from taking lunch earlier. What they really should be doing is pretending it's a conversation with their future self. Six months from now you're most likely going to be seeing those comments, or lack of them, wondering why you made that change. It's in that moment I'd rather have a very descriptive summary of what changed and why versus diffing every version known to man.

I'm not asking for a Kevin Costner script, just be a little more specific. On average, my comments are 1-3 sentences. For the important changes, I've used paragraphs before. Comments with fewer than 5 words drive me nuts, and I have seen a lot of them.

Ever seen a movie where campers make a trail using something like marshmallows so they can find their way back? Think of your commit comments like marshmallows helping you unravel the mystery of an issue and start developing the habit of providing more descriptive commit comments.

Other Useful Tools
One of the tools I have grown fond of is the Jira subversion plugin. Our team uses Jira as our issue tracking system. Nothing gets committed without a Jira number. This is enforced using a subversion pre-commit hook, so every developer has to start their comment with a Jira number. Then in Jira, there is a Subversion Commits link at the bottom where anyone can view the file changes and their comments.
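For anyone curious what that kind of enforcement looks like, here is a minimal, hypothetical Groovy sketch of a pre-commit hook; our actual hook may differ, and the Jira key pattern is an assumption.

// Subversion invokes the hook with the repository path and a transaction id;
// a non-zero exit code rejects the commit.
def repos = args[0]
def txn = args[1]

def message = "svnlook log -t ${txn} ${repos}".execute().text.trim()

if (!(message =~ /^[A-Z]+-\d+/)) {
    System.err.println 'Commit rejected: comment must start with a Jira number (e.g. AC-4207).'
    System.exit(1)
}
System.exit(0)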

Using the subversion pre-commit hook has another added bonus, which is using the Hudson Jira Plugin. After a successful build, hudson will extract the Jira number(s) from the commit comments and add a comment to the Jira stating which build it was integrated in, including a link.

Tuesday, February 16, 2010

Rapid REST Development with maven jetty plugin

Ever wonder how much time is wasted by Java developers rebuilding and redeploying web applications? For me alone I can't imagine it. In fact, I'd be embarrassed if Google Buzz made it public knowledge without my consent. Two years ago I wrote an article "No more J2EE Apps" and I received a lot of great feedback. Let me first say that, IMHO, Java Developers are at a disadvantage when it comes to rapidly developing web applications with a static language. Developers using python, php, rails, or grails really don't even have to spend a second trying to solve this problem. On the other hand, Java Developers have to figure out how to best accomplish this, and every situation seems to be different: JBoss, Weblogic, Eclipse, Idea, Netbeans, Jetty, JRebel. Of all the solutions I think JRebel provides the best chance for success in any environment, no matter the web container or the developer's IDE of choice.

I haven't done much hardcore Java development in a while, but in order to improve my team's productivity, I am going to spend the next couple of months exploring best practices in this area. First up, I am going to explain how I got the maven jetty plugin to work with our REST Services WAR and the steps necessary to redeploy changes. The nice thing about jetty is that it's easy to use from maven, and we could use it in our CI environment to possibly reduce our build times and provide quicker feedback. The downside is that each change requires jetty to hot deploy the new WAR. In the end I think the best solution will be a combination of JBoss+JRebel, but I won't get to that for a while.

Maven Jetty Plugin
My first prototype uses the maven-jetty-plugin version 6. The application we are testing is a WAR containing REST services built with Jersey (JAX-RS). Here is a good posting by my co-worker Jeff Black, "Jersey...Jetty and Maven style!". That example didn't work for me because, for some insane reason, the init-param com.sun.ws.rest.config.property.packages does not work in WebSphere, so I had to make some slight modifications to get it to work without that being declared. My pom.xml and jetty.xml files are below. Most "normal" non-Jersey applications don't need all of this, but it was necessary to get our legacy WAR working with jetty.

Here are the steps involved to start jetty and redeploy changes (a combined command-line summary follows the list):

  1. mvn install jetty:run - this will first build the WAR and then start jetty while also deploying the WAR. Running install is necessary because I reference the exploded WAR directory under target in the jetty configuration.
  2. Make changes to source code.
  3. Run "mvn compile war:exploded" in a separate terminal to compile your changes and copy the new class files to the location Jersey expects to find them, which in my case is target/myapp/WEB-INF/classes.
  4. Click back to the terminal running jetty and hit the ENTER key. This causes jetty to reload the WAR. It works because I set the scan interval to 0 and jetty.reload to manual, so I can batch up multiple changes before reloading.
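For quick reference, here's roughly what that workflow looks like at the command line (assuming the exploded WAR lands under target/myapp as in my configuration):

# Terminal 1: build the WAR, then start jetty and deploy it
mvn install jetty:run

# Terminal 2: after editing source, compile and refresh the exploded WAR
mvn compile war:exploded

# Back in terminal 1: hit ENTER so jetty reloads the WAR
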
Overall I am happy with the results so far. Previously, it took 1-2 minutes and sometimes more to rebuild the war and hotdeploy to JBoss. Using the jetty plugin this now takes around 30 seconds. Again, I think this could be further improved by using JRebel.

Tips
I did have to update my MAVEN_OPTS environment variable to increase Java's PermGen space, since jetty reloads the WAR each time and you'll quickly run out of memory. This was something I was already doing for JBoss. Here is what it is set to:

export MAVEN_OPTS="-Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"

Sample Files
Here is the relevant portion of my pom.xml:
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.22</version>
  <configuration>
    <jettyConfig>${project.build.testOutputDirectory}/jetty.xml</jettyConfig>
    <scanIntervalSeconds>${jetty.scan.sec}</scanIntervalSeconds>
    <useTestClasspath>true</useTestClasspath>
    <webAppConfig>
      <baseResource implementation="org.mortbay.resource.ResourceCollection">
        <resourcesAsCSV>${project.build.directory}/myapp</resourcesAsCSV>
      </baseResource>
    </webAppConfig>
    <!-- both properties belong in a single systemProperties element -->
    <systemProperties>
      <systemProperty>
        <name>jetty.port</name>
        <value>${jetty.port}</value>
      </systemProperty>
      <systemProperty>
        <name>jetty.reload</name>
        <value>${jetty.reload}</value>
      </systemProperty>
    </systemProperties>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>commons-dbcp</groupId>
      <artifactId>commons-dbcp</artifactId>
      <version>1.2.1</version>
    </dependency>
    <dependency>
      <groupId>net.sourceforge.jtds</groupId>
      <artifactId>jtds</artifactId>
      <version>1.2</version>
    </dependency>
    <dependency>
      <groupId>com.oracle.jdbc</groupId>
      <artifactId>ojdbc14</artifactId>
      <version>10.2.0</version>
    </dependency>
  </dependencies>
</plugin>
.....
<properties>
  <jetty.port>8080</jetty.port>
  <jetty.scan.sec>10</jetty.scan.sec>
  <jetty.reload>manual</jetty.reload>
</properties>

Here is my jetty.xml, located under /src/test/resources, which defines the datasource:
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">

<Configure id="Server" class="org.mortbay.jetty.Server">

  <New id="MYAPP-DS" class="org.mortbay.jetty.plus.naming.Resource">
    <Arg>jdbc/MYAPP-DS</Arg>
    <Arg>
      <New class="org.apache.commons.dbcp.BasicDataSource">
        <Set name="driverClassName">oracle.jdbc.driver.OracleDriver</Set>
        <Set name="url">jdbc:oracle:thin:@localhost:1521:XE</Set>
        <Set name="username">user</Set>
        <Set name="password">password</Set>
      </New>
    </Arg>
  </New>

</Configure>
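Once jetty is up, a quick sanity check from another terminal confirms the WAR actually deployed; the URL path below is just a placeholder for whatever your Jersey resources expose:

curl -i http://localhost:8080/myapp/rest/ping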

Thursday, February 4, 2010

Getting Started with Extjs

I was recently approached by a co-worker within Accenture about our team's experience with Extjs. Out of 150,000+ employees in Accenture, he was able to find us through Yammer, a free, private, twitter-like service for companies. Originally I was just going to respond to him via email after our phone conversation, but that would only benefit him. Instead I thought it might be a good idea to share that experience on this blog.

My current team has been using Extjs for about 2 years. I don't consider myself an expert, nor do I actually enjoy developing in a language that needs to work in multiple browsers. We started out with version 2.x, and late last year we converted our projects to version 3, which was no easy task.

I will say Extjs has impressed this Java developer. The documentation and examples are excellent, and feedback on the forums is great. Most of my team didn't have a heavy javascript background, and all have been able to come up to speed quickly. Right now I think the biggest drawback is the GPL licensing of version 3. I understand why they did it, but that doesn't mean I have to like it.

Extjs Resources

  • Examples - Extjs comes with a nice library of great examples on all kinds of things you can do with Extjs. All the examples also come with the download. One of the neatest examples is the Advanced Grid Filtering. One of the main reasons we upgraded to version 3 was for this feature.
  • API Documentation - The javadoc for extjs. This is the most important and most referenced documentation for any extjs developer. Keep in mind this only shows the most recent version. If you want the API documentation for previous versions, you'll need to download that version of Extjs. The download includes the same API documentation. For local installation or offline reference, there is also a very nice Adobe Air app (see download page).
  • Forums - The forums are very active and we have received a lot of help there. We also purchased the premium support, and overall I'd say it was worth it, especially since we no longer have that go-to javascript guru anymore (yeah, you know who you are).
  • Community Plugins - It's not huge but between the examples noted above and the community plugins we have been able to reuse a lot and create some neat stuff pretty fast.
  • Blogs - I'd recommend subscribing to the official Extjs blog as well as this search-related feed from DZone. And, just because I think it is cool, here is a Google Reader bundle I created that captures everything I tag with extjs. Finally, here is a search of my blog for articles I have written about Extjs.
  • Books - According to Amazon there are multiple books out right now about developing with Extjs. I have "Learning Ext JS" by Shea Frederick.
Miscellaneous
  • Javascript/CSS Consolidator - One of the best moves we made was investing in a javascript/css consolidator. Props go out to Joe Kueser for doing this for us. After a few months of development, our application started to grind to a crawl as several (80+) javascript and css files were being downloaded. Using the jawr framework we reduced that to around 20 GET requests, which improved performance dramatically. Not only does it combine multiple files into one, but it also minifies them (removes comments, whitespace, etc.) and supports gzipping them. I have been very impressed with jawr. It has exceeded our expectations and I would recommend it to anyone.
  • Development Environment - No web developer would be complete without Firefox and Firebug. But for those occasional nasty IE issues I haven't found the perfect solution. In the past I have had some luck with jsdt. Currently I am using IE8 in VirtualBox in IE7 mode with its built-in Developer Tools. It's not Firebug, but it's the closest I have found. For those really bad IE6 issues I have used the Internet Explorer Developer Toolbar, which beats scattering alerts in your code and is better than nothing.
  • Extjs version 2.x or 3.x - There has been a lot of history surrounding the licensing strategy, so I won't go into all of that. Just know that version 2.x is LGPL and can be used or embedded in your applications for free. Version 3.x is GPL, which means if your project isn't an internal IT app or also GPL, you need to purchase developer licenses.
  • Public Uses - Here are just a few well known sites I have noticed using Extjs: Quicken Online and Dow Jones Industrial Index.
  • jQuery Integration - In case you want to combine jQuery and Extjs you can by using the extjs-jquery-adapter available in the download. I haven't really used jQuery a lot but from what I have read and heard, it's a great javascript library that supports animation and DOM querying. You can do these things in Extjs, but jQuery looks like it might do it better and cleaner. I think Extjs excels in providing out of the box prebuilt and customizable Widgets/Components like the Grid (Table).

Thursday, January 28, 2010

Precompiling JSPs for WebSphere 6.1

Don't envy me..........You're about to witness my efforts from the past 2-3 weeks.

For the past week I have been trying to get precompiled JSPs working for WebSphere 6.1, because our target environment does not include the Java Development Kit (JDK) for security reasons. The following is a brief explanation on how to precompile JSPs for WebSphere 6.1 using maven2. And as an added bonus, I'll also explain how to use the maven-was6-plugin to automate deployment of a WAR using Jython.

Precompiling JSPs for WebSphere
First off, let me say this was not fun. I spent a ton of time trying to figure out why the precompiled JSPs worked on JBoss 4.2.1 but not WebSphere 6.1. Both maven jspc plugins (jspc-maven-plugin and maven-jetty-jspc-plugin) produced precompiled JSPs that worked on JBoss, but not WebSphere. From what I could tell, it all boiled down to the fact that JBoss 4.2.1 includes version 2.1 of the jsp-api while WebSphere 6.1.0.27 includes version 2.0. Just so you believe me, the 2 conflicting jars in WebSphere were $WAS_HOME/lib/j2ee.jar and $WAS_HOME/plugins/com.ibm.ws.webcontainer_2.0.0.jar. On the plus side, I did find a great Java decompiler for linux called jd-gui. I'd highly recommend it for any operating system. It uses jad, but provides a nice GUI that even lets you open a jar and explore any .class file.

In summary, based on my experience, the 2 existing maven jspc plugins do not create precompiled JSPs that are compatible with WebSphere 6.1.0.27. Surprisingly, the solution that ended up working was the JspBatchCompiler.sh script that comes with WebSphere.

The JspBatchCompiler.sh script was exactly what we needed. Thankfully, you can give it a WAR or EAR file and it will explode it, precompile all the JSPs, and repackage it back up again. The script is located under $WAS_HOME/bin. Once I verified it by manually running the script (see the example below), the next step was to automate it using maven. Since I had wasted so much time figuring everything else out, I didn't spend a ton of time improving the maven portion. Instead I went with what I knew, which was using the exec-maven-plugin to run the script. After the manual example is the profile I used to precompile the JSPs in the WARs located in the EAR.
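For reference, the manual invocation looked roughly like this (the EAR path is a placeholder for your own build output):

cd $WAS_HOME/bin
./JspBatchCompiler.sh -ear.path /path/to/target/myapp-1.0.ear -jdkSourceLevel 15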



<profile>
  <id>precompile</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>precompile-was-ear-jsps</id>
            <phase>validate</phase>
            <goals>
              <goal>exec</goal>
            </goals>
            <configuration>
              <executable>${wasHome}/bin/JspBatchCompiler.sh</executable>
              <arguments>
                <argument>-ear.path</argument>
                <argument>${pom.basedir}/target/${project.artifactId}-${project.version}.${project.packaging}</argument>
                <argument>-jdkSourceLevel</argument>
                <argument>15</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <id>rename-replace-was-ear</id>
            <phase>validate</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <tasks>
                <move file="${java.io.tmpdir}/${project.artifactId}-${project.version}.${project.packaging}"
                      tofile="${pom.basedir}/target/${project.artifactId}_jspc-${project.version}.${project.packaging}"/>
                <delete file="${pom.basedir}/target/${project.artifactId}-${project.version}.${project.packaging}"/>
              </tasks>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
The first plugin section uses the exec-maven-plugin to run JspBatchCompiler.sh and takes the EAR as input. The second plugin section uses the maven-antrun-plugin to rename the EAR to include a keyword (_jspc) in the filename so everyone knows when they have an EAR with precompiled JSPs. It then moves the new EAR from its tmp location back to the module's target directory where everything expects it to be. Once that is done, it removes the old EAR to avoid confusion.

About the only improvement that could be made is making it portable to other Operating Systems like Windows. This "could" be accomplished by using the JspC ant task WebSphere provides, but I couldn't find any good examples of how to do that via maven, so I took a rain check.

Deploy WAR to WebSphere 6.1
This Jython code snippet literally saved our sprint. I am not sure how I would have automated deploying and undeploying a WAR via maven without it, as the maven-was6-plugin really only works with EARs. That's because when deploying a WAR you need to provide the WAR's context root, which the plugin currently doesn't support (MWAS-59). I was, however, able to call Jython scripts from the maven-was6-plugin to undeploy and deploy a WAR.

The following profiles and Jython scripts show how to use maven and Jython to undeploy and deploy a WAR. The Jython scripts exist in files under the same directory as the pom.



<profile>
  <id>undeploy-war</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>was6-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>undeploy</id>
            <phase>validate</phase>
            <goals>
              <goal>wsAdmin</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <wasHome>${wasHome}</wasHome>
          <profileName>AppSrv01</profileName>
          <conntype>SOAP</conntype>
          <applicationName>petstore_war</applicationName>
          <earFile>${pom.basedir}/target/petstore.war</earFile>
          <updateExisting>false</updateExisting>
          <language>jython</language>
          <script>uninstallApp.py</script>
          <host>localhost</host>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

# File: uninstallApp.py
# Jython script to undeploy the WAR
# FYI, the was6 plugin does support the ability to pass in params to the jython script
cellName = 'testbed01Node01Cell'
nodeName = 'testbed01Node01'
serverName = 'server1'

#Find the installed app
apps = AdminApp.list().split("\n");
theApp = ""
for iApp in apps:
    if str(iApp).find("petstore") >= 0:
        theApp = iApp;

if theApp != "":
    #Stop the running app
    print "Stopping App: ", theApp
    appManager = AdminControl.queryNames('cell='+cellName+',node='+nodeName+',type=ApplicationManager,process='+serverName+',*')
    AdminControl.invoke(appManager, 'stopApplication', theApp)

    #Uninstall the app
    print "Uninstalling App: ", theApp
    AdminApp.uninstall(theApp);
    AdminConfig.save();
    print "Application uninstalled successfully!"

Here is the profile and Jython script I used to Deploy a WAR:


<profile>
  <id>deploy-war</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>was6-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>deploy</id>
            <phase>validate</phase>
            <goals>
              <goal>wsAdmin</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <wasHome>${wasHome}</wasHome>
          <profileName>AppSrv01</profileName>
          <conntype>SOAP</conntype>
          <applicationName>petstore_war</applicationName>
          <earFile>${pom.basedir}/target/petstore.war</earFile>
          <updateExisting>false</updateExisting>
          <language>jython</language>
          <script>installApp.py</script>
          <host>localhost</host>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

# File: installApp.py
# Jython script to deploy WAR
# FYI, the was6 plugin does support the ability to pass in params to the jython script
cellName = 'testbed01Node01Cell'
nodeName = 'testbed01Node01'
serverName = 'server1'

#Install the app
print "Installing App: "
AdminApp.install("../petstore.war", "-contextroot /petstore -defaultbinding.virtual.host default_host -usedefaultbindings");
AdminConfig.save();

#Start the app
apps = AdminApp.list().split("\n");
theApp = ""
for iApp in apps:
    if str(iApp).find("petstore") >= 0:
        theApp = iApp;
print "Starting App: ", theApp
appManager = AdminControl.queryNames('cell='+cellName+',node='+nodeName+',type=ApplicationManager,process='+serverName+',*')
AdminControl.invoke(appManager, 'startApplication', theApp)
print "Application installed and started successfully!"

That's it. The Jython scripts could be improved by making the cell, node, and server names configurable instead of hardcoded, and it "appears" the maven-was6-plugin supports passing in properties, but I just didn't have the time to figure that out at the moment.

By the way, I hope maven 3 has solved the XML verbosity when it comes to doing simple things like creating profiles. That's a lot of XML to do very little.

Thursday, January 14, 2010

DZone Top Links Feed


I have a problem: the DZone Top Links section is great but doesn't support an RSS feed, even though DZone has feeds for just about everything else. This would be like digg or tweetmeme not having a feed for their most popular links. What makes it worse is that I read a majority of the Top Links, but in order to do so I have to keep a tab open in Firefox. Wouldn't it be great if I could just subscribe via Google Reader? Now, thanks to Yahoo Pipes's screen scraping capability, you can.

Click this link to subscribe to the DZone Top Links Feed: http://pipes.yahoo.com/jlorenzen/dzonetoplinks.

This was accomplished by cloning my RssHuskerPedia pipe and changing a few things around. These pipes depend on the Fetch Page module, which essentially lets you scrape the page and build a list from it. This feed doesn't contain all the metadata that would normally come from an official DZone feed, but it covers the basics and prevents me from missing great articles.

Please vote this up on DZone to get the word out, and hopefully DZone will create an official feed. If it gets voted up enough and makes it to the Top Links section, hopefully you won't get stuck in an infinite loop.