tag:blogger.com,1999:blog-12806194399150493832024-03-18T04:47:29.048-05:00James Lorenzen's BlogAddicted to Learningjlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comBlogger144125tag:blogger.com,1999:blog-1280619439915049383.post-76540719920513960272022-04-07T12:18:00.003-05:002022-04-07T12:18:51.103-05:00Moved to blog.jlo.failI've moved! I've switched blogging platforms (i.e. <a href="https://hashnode.com/" target="_blank">hashnode</a>) and bought my own domain: <a href="https://blog.jlo.fail" target="_blank">https://blog.jlo.fail</a>. Check out my first blog "<a href="https://blog.jlo.fail/why-a-fail-domain" target="_blank">Why a .fail domain</a>?". It will be the home to all my new articles, so subscribe to my newsletter. I plan on keeping all the articles here on Blogspot without migrating them to my new domain. Blogspot was great and it met the need of being a low-barrier way to start blogging. It was just time to buy my own domain and have better support for writing developer-focused articles that use markdown. Anyways, see ya there.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-12942499472723642492020-10-08T08:12:00.002-05:002020-10-08T08:15:08.094-05:00H2 URL with PostgreSQL mode and jOOQJust a quick code snippet to show my future self how I was able to finally get <a href="https://www.h2database.com/html/main.html" target="_blank">H2</a> working with a <a href="https://www.jooq.org/" target="_blank">jOOQ</a> query that in production hits a PostgreSQL database. I spent hours trying to get this to work. H2 kept complaining about not being able to find the <code>public</code> schema when our jOOQ query was attempting to execute a query like <code>select * from "public"."customer"</code>. It wasn't the schema it was having trouble with, but the quotes. 
I figured this out by pulling up the H2 console and trying the query with and without quotes; without the quotes it worked. That's when I realized I needed to somehow tell H2 this was for PostgreSQL. H2 has a <a href="http://www.h2database.com/html/features.html?highlight=init&search=init#compatibility" target="_blank">MODE</a> option that you can set.<div><br /></div><div>The following is the code snippet that finally worked. I have not optimized it. The <code>url</code> value contains all my attempts to get this thing to work. I think it can definitely be trimmed down but that's out of scope for this blog. I'm just trying to save others some time including my future self.<pre class="brush: java">import org.springframework.boot.jdbc.DataSourceBuilder;
DataSourceBuilder.create()
    .driverClassName("org.h2.Driver")
    .url("jdbc:h2:~/.h2/testdb"
        + ";TRACE_LEVEL_SYSTEM_OUT=3"
        + ";INIT=create schema if not exists public AUTHORIZATION sa\\"
        + ";RUNSCRIPT FROM 'classpath:schema.sql'"
        + ";DB_CLOSE_ON_EXIT=FALSE"
        + ";SCHEMA=public"
        + ";MODE=PostgreSQL"
        + ";DATABASE_TO_LOWER=TRUE")
    .username("sa")
    .password("")
    .build();
</pre></div>jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-13526321993899118302020-02-27T16:21:00.000-06:002020-02-27T16:21:50.032-06:00Parameterized Tests: An Underused TechniqueOver the past several years I've come to really value Parameterized Tests. I think they are an underused technique that should be considered more often. They make it easier to cover more of the test space and reduce cognitive load by being succinct.<br />
<br />
So what are Parameterized Tests? Well, they allow developers to run the same test multiple times over a set of different values. Here is a simple Java example using Java's <a href="https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html" target="_blank">Stream</a> class with <a href="https://joel-costigliola.github.io/assertj/" target="_blank">AssertJ</a>:<br />
<pre class="brush: java">Stream.of(null, "", " ", "false", "FALSE")
    .forEach(value -> assertThat(Boolean.valueOf(value)).isFalse());</pre>
<h3>
Cucumber Inspiration</h3>
<br />
I first discovered this technique with <a href="https://cucumber.io/" target="_blank">Cucumber</a>, which has the ability to run a scenario multiple times over a set of different values using a <a href="https://cucumber.io/docs/gherkin/reference/#scenario-outline" target="_blank">Scenario Outline</a>. Prior to this I was in the habit of copying and pasting a <code>Scenario</code> and tweaking the <code>Given</code> and <code>Then</code> steps; or I wouldn't test all the sad path combinations in an effort to reduce test maintenance. Learning about the <code>Scenario Outline</code> changed my world. I was able to combine multiple scenarios into a single scenario and the result was more comprehensible and easier to maintain. For example, this is a common <code>Scenario Outline</code> we might use to test an API endpoint to ensure it validates the input. Previously, this would have been spread across multiple scenarios or not tested at all.<br />
<br />
<pre class="brush: plain">Scenario Outline: Should return a 400 when signing up with an invalid email address
  When I attempt to sign up with email "<Email>"
  Then I should be returned a "400 Bad Request" status code

  Examples:
    | Email                       |
    | ${absent}                   |
    | ${blank}                    |
    | ${whitespace}               |
    | mailto:john.doe@example.com |
    | john.doe@example            |
    | john.doe@example.           |
    | john.doe@example.com.       |
    | john.doe@@example.com       |
    | john.doe@example..com       |
    | john doe@example.com        |
</pre>
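The same sweep works outside Cucumber too. Here is a plain Java sketch of the idea, using a deliberately simplistic validator I made up for illustration (real email validation is far hairier, and <code>isValid</code> here is not production code):

```java
import java.util.List;
import java.util.regex.Pattern;

public class EmailTable {
    // Deliberately simplistic pattern for illustration only; not RFC 5322 compliant.
    static final Pattern SIMPLE_EMAIL =
            Pattern.compile("^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\\.[A-Za-z0-9-]+)+$");

    public static boolean isValid(String email) {
        return email != null && SIMPLE_EMAIL.matcher(email).matches();
    }

    public static void main(String[] args) {
        // Mirrors the Examples table: every entry should be rejected.
        List<String> invalid = List.of(
                "", " ", "mailto:john.doe@example.com", "john.doe@example",
                "john.doe@example.", "john.doe@example.com.", "john.doe@@example.com",
                "john.doe@example..com", "john doe@example.com");
        invalid.forEach(e -> {
            if (isValid(e)) {
                throw new AssertionError("should have rejected: " + e);
            }
        });
        System.out.println("all invalid inputs rejected");
    }
}
```

Once the inputs live in one table, adding a new sad-path case is a one-line change, which is the same property the Scenario Outline gives you.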
<h3>
Recognizing the Pattern</h3>
<br />
Once I discovered this concept and got comfortable with it, I explored ways to introduce it in lower level tests like unit and integration tests. But the trick was recognizing when to apply it. Basically, any time you find yourself copying and pasting a test and tweaking the arrange and assert statements, you've probably got a good candidate for a Parameterized Test.<br />
<br />
For example, let's say you have a class that determines if a number is a prime number or not. Before Parameterized Tests you might have been tempted to write something like the following:<br />
<br />
<pre class="brush: java">public class PrimeNumberCheckerTestOldSchool {
    @Test
    public void shouldReturnTrueForPrimeNumber() {
        assertThat(PrimeNumberChecker.check(2)).isTrue();
    }

    @Test
    public void shouldReturnFalseForNonPrimeNumber() {
        assertThat(PrimeNumberChecker.check(6)).isFalse();
    }

    @Test
    public void shouldReturnFalseForAnotherNonPrimeNumber() {
        assertThat(PrimeNumberChecker.check(9)).isFalse();
    }

    @Test
    public void shouldReturnTrueForAnotherPrimeNumber() {
        assertThat(PrimeNumberChecker.check(17)).isTrue();
    }
}
</pre>
<br />
Here you can see we are just repeating the same test with different inputs and expected results. And since it's spread out across multiple test methods, it's harder to comprehend, and this solution doesn't scale well if we want to add additional cases.<br />
<br />
The ideal solution would be to define a sort of <a href="https://en.wikipedia.org/wiki/Truth_table" target="_blank">truth table</a>, like Cucumber's <code>Examples</code> table, that includes the combinations of inputs and expected results in a single test. This is where Parameterized Tests come in.<br />
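Even without a framework, the truth-table idea can be sketched in plain Java. Note that <code>check</code> below is a stand-in implementation, since the actual PrimeNumberChecker class isn't shown in this article:

```java
import java.util.Map;

public class PrimeTable {
    // Stand-in for PrimeNumberChecker.check(int): trial division up to sqrt(n).
    public static boolean check(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The truth table: each input paired with its expected result, in one place.
        Map<Integer, Boolean> table = Map.of(2, true, 6, false, 9, false, 17, true);
        table.forEach((number, expected) -> {
            if (check(number) != expected) {
                throw new AssertionError("check(" + number + ") != " + expected);
            }
        });
        System.out.println("all cases passed");
    }
}
```

A framework buys you better failure reporting and one test entry per row, but the table-of-cases shape is the essence.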
<h3>
JUnit 5</h3>
<br />
My preferred way to write Parameterized Tests is with JUnit 5 (see <a href="https://junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests" target="_blank">Parameterized Tests</a> with JUnit 5). It's a big improvement over the <a href="https://github.com/junit-team/junit4/wiki/Parameterized-tests" target="_blank">JUnit 4 way</a> of doing Parameterized Tests. We can rewrite the earlier example using the <code>@ParameterizedTest</code> and <code>@CsvSource</code> annotations (see other <a href="https://junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests-sources" target="_blank">source annotations</a>):<br />
<br />
<pre class="brush: java">public class PrimeNumberCheckerTestJUnit5 {
    @ParameterizedTest
    @CsvSource({
        "2, true",
        "6, false",
        "9, false",
        "17, true"
    })
    public void shouldCheckPrimality(final int number, final boolean expected) {
        assertThat(PrimeNumberChecker.check(number)).isEqualTo(expected);
    }
}
</pre>
<br />
Here are some of the benefits of this approach:<br />
<ul>
<li>Easier to comprehend since it's not spread out across 4 test methods.</li>
<li>Easier to add/remove additional scenarios.</li>
<li>Easier to delete if the Method Under Test (MUT) is removed; all we will have to do is delete one test method!</li>
</ul>
<br />
To write Parameterized Tests with JUnit 5 you need to include the following dependencies. Note the JUnit 5 documentation states this is an "experimental" feature.<br />
<br />
<pre class="brush: xml"><dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.5.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>5.5.2</version>
    <scope>test</scope>
</dependency></pre>
<h3>
JUnit 4</h3>
<br />
A quick note about JUnit 4. If your team/project is still using JUnit 4, you can use both JUnit 4 and JUnit 5 simultaneously. JUnit 5 provides a gentle <a href="https://junit.org/junit5/docs/current/user-guide/#migrating-from-junit4" target="_blank">migration path</a>. Just include the following dependency. I also recommend reading the <a href="https://junit.org/junit5/docs/current/user-guide/#migrating-from-junit4-tips" target="_blank">Migration Tips</a>.<br />
<br />
<pre class="brush: xml"><dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <version>5.5.2</version>
    <scope>test</scope>
</dependency></pre>
<br />
Now you can safely write JUnit 5 tests, and take advantage of the new <code>@ParameterizedTest</code> annotation, without having to migrate all your existing JUnit 4 tests.<br />
<h3>
Conclusion</h3>
<br />
Well hopefully I've convinced you of the power of Parameterized Tests and you'll look for the right opportunity to try them out. And keep in mind while these examples use Java, this technique should also be applicable to other languages. So do some research and see if your language provides them and if not maybe create your own. That's what we did before JUnit 5 was out and we weren't happy with the JUnit 4 way of doing Parameterized Tests.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.com17tag:blogger.com,1999:blog-1280619439915049383.post-35055987197445174632015-01-09T00:21:00.006-06:002015-01-09T00:38:36.440-06:00Setting Environment Variables for Docker with FigFor the past few months I've been playing around with <a href="https://www.docker.com/">Docker</a>, and so far I've had a ton of fun. The <a href="https://docs.docker.com/">documentation</a> is excellent, and in one simple <a href="https://docs.docker.com/userguide/dockerizing/#hello-world">command</a> you can start experimenting. After going through the tutorial, one of my first goals was to figure out the best way to create a Docker image for a <a href="http://projects.spring.io/spring-boot/">spring-boot</a> service. My initial goals were to make it easy to set environment variables, since our projects follow the <a href="http://12factor.net/">twelve-factor</a> methodology. One of the several factors we follow is the third factor (III. Config), which recommends storing the config in the environment. While this has many benefits, one of the downsides is the tendency to create a lot of environment variables because it's quick, easy, and well defined. This makes it difficult to configure, test, and run the service. But as we will see, <a href="http://www.fig.sh/">Fig</a> will not only make it easy to set environment variables, it will also provide many other benefits.<br />
<br />
<b>Docker Image</b><br />
Let's first start with a pretend service called the logging service. It's a Java service created with spring-boot. Here is a basic Dockerfile:
<br />
<pre class="brush: bash">FROM dockerfile/java:oracle-java8
COPY logging-service-0.1.0.jar /data/
EXPOSE 8080
CMD ["java", "-jar", "logging-service-0.1.0.jar"]
</pre>
<a href="https://docs.docker.com/reference/builder/#from">FROM</a> - the base image I start with. In this case it's the <a href="https://registry.hub.docker.com/u/dockerfile/java/">dockerfile/java</a> base image with the Oracle JDK 8 tag.<br />
<a href="https://docs.docker.com/reference/builder/#copy">COPY</a> - here I copy over the jar so it's present in the image<br />
<a href="https://docs.docker.com/reference/builder/#expose">EXPOSE</a> - this tells Docker the container will be listening on port 8080 at runtime<br />
<a href="https://docs.docker.com/reference/builder/#cmd">CMD</a> - here I've defined a default command to run which will start the service<br />
<br />
Next we need to build the image:<br />
<code>docker build --tag="jlorenzen/logging-service:v1" .</code><br />
<br />
<b>Docker Run</b><br />
Now we could run our new service by executing this command:<br />
<code>docker run -dP jlorenzen/logging-service:v1</code><br />
<br />
That's great but let's imagine the logging-service requires the following environment variables: ENV_1 and ENV_2. Here is how you would run the service while also setting the environment variables:<br />
<code>docker run -dP -e ENV_1=value1 -e ENV_2=value2 jlorenzen/logging-service:v1</code><br />
<br />
That's a basic example, but you can imagine how nasty it could get if your service required a dozen or more environment variables. The <code><a href="https://docs.docker.com/reference/commandline/cli/#run">docker run</a></code> command also has some other nice options for setting environment variables. For example, when using the <code>-e</code> option, if you provide just the name like <code>-e ENV_1</code> without a value, then that variable's current value will be used. Or you can use the <code>--env-file</code> option to specify a file that contains a list of environment variables. While this all works, it's really not enjoyable having to remember all those options and commands. That is where Fig can help. And it not only helps us easily set environment variables, but it also makes creating containers simpler and reproducible by anyone anywhere.<br />
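On the service side, the twelve-factor contract is nothing more than reads from the process environment at startup. A minimal Java sketch (the variable names and defaults here are illustrative, not from the actual logging-service):

```java
public class Config {
    // Fails fast at startup if a required variable is missing.
    public static String required(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("missing required env var: " + name);
        }
        return value;
    }

    // Returns a fallback when an optional variable is unset.
    public static String optional(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        System.out.println("ENV_1=" + optional("ENV_1", "<unset>"));
        System.out.println("ENV_2=" + optional("ENV_2", "<unset>"));
    }
}
```

Failing fast on missing config makes the <code>-e</code>/<code>--env-file</code> wiring visible: a container started without its variables dies immediately instead of misbehaving later.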
<br />
<b>Fig</b><br />
Fig is basically a simple utility that wraps Docker, making it easier to create and manage Docker containers. In our case we will use it to run our logging-service image and set the environment variables. Here is a simple <code>fig.yml</code> file:<br />
<pre class="brush: bash">logging-service:
  image: jlorenzen/logging-service:v1
  ports:
    - "8080"
  environment:
    - ENV_1
    - ENV_2
</pre>
As you can see I didn't specify any values for the environment variables. That's because I already have them defined in my host using <a href="http://direnv.net/">direnv</a> and Fig will just automatically use them. So in my case I have a local <code>.envrc</code> file that contains the following:<br />
<pre>export ENV_1=value1
export ENV_2=value2
</pre>
This allows me to set all my environment variables in one place. Here is the command I can use to start the container:<br />
<code>fig up</code><br />
<br />
That's it! Much simpler than the corresponding <code>docker run</code> command.<br />
<br />
<b>Ideal World</b><br />
The best of both worlds would be if Fig supported the <code>docker run --env-file</code> option and could read in a file containing <code>export</code> commands, which is the format direnv requires. It seems support for the <code>--env-file</code> option in Fig is <a href="https://twitter.com/jlorenzen/status/553195845135650816">coming soon</a>, so we are halfway there.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-89636959937923549492014-11-18T15:52:00.000-06:002014-11-18T21:51:53.418-06:00Example Using Grails PromisesI was recently playing around with the <a href="http://grails.org/doc/2.3.0.M1/guide/async.html">Asynchronous Programming</a> features in <a href="https://grails.org/">Grails</a> using Promises, and wanted to share an example that goes a little beyond the basics. In case you are using an older version of Grails, the asynchronous features were added in Grails 2.3. While there are a lot of useful asynchronous features in Grails, for this article I'll only focus on using Promises. Promises are a common concept in many concurrency frameworks. They are similar to Java's <a href="https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Future.html">java.util.concurrent.Future</a> interface, but like all things with Grails/Groovy, Grails has made them easier to use.<br />
<br />
First, before showing you an example, go ahead and run <code>grails console</code> under an existing grails project. If you don't have one, install grails (see <a href="http://gvmtool.net/">GVM</a>) and run <code>grails create-app</code>. Using the grails console will allow you to quickly run these examples and experiment on your own.<br />
<br />
<b>Basic Example</b><br />
<pre class="brush: groovy">import static grails.async.Promises.task
import static grails.async.Promises.waitAll

def task1 = task {
    println "task1 - starting"
    Thread.sleep(5000)
    println "task1 - ending"
}
def task2 = task {
    println "task2 - starting"
    Thread.sleep(1000)
    println "task2 - ending"
}
waitAll(task1, task2)
</pre>
This would output:<br />
<pre>task1 - starting
task2 - starting
task2 - ending
task1 - ending
</pre>
<br />
<b>More Complex Example</b><br />
Let's say you wanted to list the states of 5 zip codes. Here is what that would look like if we did it synchronously:<br />
<br />
<pre class="brush: groovy">["74172", "64840", "67202", "68508", "37201"].each { z ->
    println "getting state for zip code: $z"
    def response = new URL("http://zip.getziptastic.com/v2/US/$z").content.text
    def json = grails.converters.JSON.parse(response)
    println "zip code $z is in state $json.state"
}
</pre>
And the output for that would look like:<br />
<pre>getting state for zip code: 74172
zip code 74172 is in state Oklahoma
getting state for zip code: 64840
zip code 64840 is in state Missouri
getting state for zip code: 67202
zip code 67202 is in state Kansas
getting state for zip code: 68508
zip code 68508 is in state Nebraska
getting state for zip code: 37201
zip code 37201 is in state Tennessee
</pre>
<br />
And here is what it would look like using Grails Promises to make it asynchronous:<br />
<br />
<pre class="brush: groovy">import static grails.async.Promises.task
import static grails.async.Promises.waitAll

def tasks = ["74172", "64840", "67202", "68508", "37201"].collect { z ->
    task {
        println "getting state for zip code: $z"
        def response = new URL("http://zip.getziptastic.com/v2/US/$z").content.text
        def json = grails.converters.JSON.parse(response)
        println "zip code $z is in state $json.state"
    }
}
waitAll(tasks)
</pre>
<br />
The asynchronous output would look like this:<br />
<pre>getting state for zip code: 37201
getting state for zip code: 68508
getting state for zip code: 67202
getting state for zip code: 64840
getting state for zip code: 74172
zip code 74172 is in state Oklahoma
zip code 37201 is in state Tennessee
zip code 64840 is in state Missouri
zip code 68508 is in state Nebraska
zip code 67202 is in state Kansas
</pre>
<br />
Each time you run the asynchronous version it will output a different order because the tasks are running asynchronously. The <code>waitAll()</code> method will block until all tasks complete.<br />
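For comparison, the same fan-out-then-wait shape is available in plain Java through <code>CompletableFuture</code>. This is a sketch of the pattern, not the Grails API, and the lookup is faked as a string so it runs offline:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class FanOut {
    public static List<String> lookupAll(List<String> zips) {
        // Start one task per zip code on the common pool (like task {} per element).
        List<CompletableFuture<String>> tasks = zips.stream()
                .map(z -> CompletableFuture.supplyAsync(() -> "looked up " + z))
                .collect(Collectors.toList());
        // The equivalent of waitAll(): block until every task completes.
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        // Every future is already done, so join() returns immediately, in submission order.
        return tasks.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        lookupAll(List.of("74172", "64840", "67202")).forEach(System.out::println);
    }
}
```

Like the Grails version, the tasks may finish in any order, but collecting results from the original list keeps the output order deterministic.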
<br />
Thanks to <a href="https://twitter.com/jeremydanderson">jeremydanderson</a> for helping me figure out how best to use the <code>collect</code> method.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-12167914298756435402014-11-11T22:47:00.002-06:002014-11-11T22:50:33.633-06:00Groovy Spring Bean for Static FactoryI started playing around with <a href="https://grails.org/">Grails</a> again recently and ran into a problem trying to create a bean in the Grails <a href="http://grails.org/doc/latest/guide/spring.html">resources.groovy</a> file for a static factory. After several frustrating hours trying to find the right combination, I eventually stumbled upon an answer.<br />
<br />
The factory I was trying to create a bean from was the JAX-RS Client API class <a href="https://docs.oracle.com/javaee/7/api/javax/ws/rs/client/ClientBuilder.html">ClientBuilder.newClient()</a> which returns a <a href="https://docs.oracle.com/javaee/7/api/javax/ws/rs/client/Client.html">Client</a> object.<br />
<br />
Here is what the bean definition looks like in my Grails resources.groovy file:<br />
<pre class="brush: groovy">import javax.ws.rs.client.ClientBuilder

beans = {
    httpClient(ClientBuilder) { bean ->
        bean.factoryMethod = 'newClient'
        bean.destroyMethod = 'close'
    }
}</pre>
<br />
Then in your Grails service or controller you can autowire or inject the bean by doing the following:<br />
<pre class="brush: groovy">class FooService {
    def httpClient

    def get(url) {
        return httpClient.target(url).request().get()
    }
}
</pre>
<br />
Hope it helps the next person.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-44975120066655863992014-07-21T22:57:00.000-05:002014-07-21T23:09:22.781-05:00Disable Spring Boot Production Ready ServicesFor the past two months I've had the pleasure of working with <a href="https://twitter.com/DevWithPurpose">The Lampo Group</a> developing Hypermedia-Driven REST services using <a href="http://projects.spring.io/spring-boot/">Spring Boot</a>. Spring Boot makes it super simple to get started writing REST services. One of its biggest advantages is by default it embeds Tomcat as the servlet container, allowing the developer to focus on other important things. Another great thing about Spring Boot is it includes the ability to easily enable what they call <a href="http://docs.spring.io/spring-boot/docs/1.1.4.RELEASE/reference/htmlsingle/#production-ready">Production-Ready</a> or Production-Grade Services. These services allow you to monitor and manage your application when it's pushed to production, and it's as easy as adding a dependency to your project. Unfortunately, it wasn't well documented how to disable some of these services. But before I show you how to disable them, let me first show you how to enable them.<br />
<br />
<b>Enabling Spring Boot's Production-Ready Services</b><br />
One of the reasons we wanted to enable some of the production-ready services was our target production environment was Amazon Web Services (AWS). As a part of that they support Elastic Load Balancing which allows one to configure a <a href="http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#healthcheck">health check</a> endpoint. It's basically an endpoint you configure in AWS that gets pinged to make sure the EC2 instance is up and running. As luck would have it, one of the services included in Spring Boot's production-ready services was a health endpoint.<br />
<br />
To enable the production-ready services all you have to do is add a dependency to your project. If you are using maven all you have to do is add the following to your <code>pom.xml</code>.<br />
<br />
<pre class="brush: xml"><dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>1.1.4.RELEASE</version>
</dependency>
</pre>
<br />
<br />
After you make the change to your <code>pom.xml</code>, just rebuild your project and you should be able to access <code>http://localhost:8080/health</code> and see something like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5AhdIoWQZkz_ToCKlxDrXeamgCzCG8VBnXDtcxpuRCDrwg00pGmhbqsKQ4a40VrxMSB-0Vf2myvxJTjB6wjd0zRzJKXd0sozAT3ktAjJjAXXrvc9ncG3iFCNDHKVL_HIFGxDp-uVKCfDB/s1600/Screen+Shot+2014-07-21+at+10.44.59+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5AhdIoWQZkz_ToCKlxDrXeamgCzCG8VBnXDtcxpuRCDrwg00pGmhbqsKQ4a40VrxMSB-0Vf2myvxJTjB6wjd0zRzJKXd0sozAT3ktAjJjAXXrvc9ncG3iFCNDHKVL_HIFGxDp-uVKCfDB/s1600/Screen+Shot+2014-07-21+at+10.44.59+PM.png" height="196" width="320" /></a></div>
<br />
<br />
Not only does it add a health endpoint, but it also adds: autoconfig, beans, configprops, dump, env, info, metrics, mappings, shutdown (not enabled by default over HTTP), and trace. Like us, you might not want to expose all these endpoints in a production environment. In fact, all we wanted to enable was the health and info endpoints. The following will show you how to disable each service individually and how to re-enable them dynamically at runtime.<br />
<br />
<b>Disabling Spring Boot's Production-Ready Services</b><br />
When you include the spring-boot-starter-actuator dependency in your project, it automatically exposes 11 different endpoints in your project. What I wanted to be able to do was disable all endpoints except health and info, but also have the ability to enable the other services at runtime via environment variables.<br />
<br />
To disable some of the production-ready services add the following to your <code>/src/main/resources/application.properties</code> file. These will be your default settings for your project.<br />
<br />
<pre class="brush: plain">endpoints.autoconfig.enabled=false
endpoints.beans.enabled=false
endpoints.configprops.enabled=false
endpoints.dump.enabled=false
endpoints.env.enabled=false
endpoints.health.enabled=true
endpoints.info.enabled=true
endpoints.metrics.enabled=false
endpoints.mappings.enabled=false
endpoints.shutdown.enabled=false
endpoints.trace.enabled=false
</pre>
<br />
<br />
To enable/disable endpoints externally at runtime you can follow any one of these <a href="http://docs.spring.io/spring-boot/docs/1.1.4.RELEASE/reference/htmlsingle/#boot-features-external-config">steps</a>. Since our project is following the <a href="http://12factor.net/">12-factor app</a> rules we needed the ability to enable/disable endpoints by setting environment variables. So for example, if I wanted to enable the metrics endpoint at runtime, I would set the following environment variable.<br />
<br />
<pre class="brush: bash">export ENDPOINTS_METRICS_ENABLED=true
</pre>
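The environment-variable names follow Spring Boot's relaxed-binding convention: uppercase the property key and replace the dots with underscores. The basic mapping is trivial (an illustrative helper, not Spring code; Spring's actual relaxed binding handles more variants, such as dashes):

```java
import java.util.Locale;

public class EnvName {
    // "endpoints.metrics.enabled" -> "ENDPOINTS_METRICS_ENABLED"
    public static String toEnvVar(String propertyKey) {
        return propertyKey.toUpperCase(Locale.ROOT).replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(toEnvVar("endpoints.metrics.enabled"));
    }
}
```

Knowing the convention lets you derive the environment-variable form of any property in the list above without consulting the docs each time.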
<br />
After restarting Spring Boot you can access the metrics endpoint at <code>http://localhost:8080/metrics</code>. Note that as of version 1.0.1.RELEASE, you are unable to disable the mappings endpoint, but this was quickly <a href="https://github.com/spring-projects/spring-boot/issues/1185">fixed</a> in 1.1.3.RELEASE.<br />
<br />
In summary, I've been very impressed with how easy it is to work with Spring Boot and the production-ready services. In another article I'll cover how to use maven filtering, along with the <a href="https://github.com/ktoso/maven-git-commit-id-plugin">git-commit-id-plugin</a>, to display project information in the info endpoint.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-1218277971915017142013-07-23T23:28:00.000-05:002013-07-23T23:28:37.140-05:00Remember Target URL with Spring Security and Jasig CASI recently ran into a unique issue when combining <a href="http://www.springsource.org/spring-security">Spring Security</a>, <a href="http://www.jasig.org/cas">Jasig CAS</a>, and the <a href="https://www.owfgoss.org/">Ozone Widget Framework</a> (OWF). Basically, the original target URL was not being remembered with all the <a href="http://static.springsource.org/spring-security/site/docs/3.1.x/reference/cas.html">redirects</a> from the CAS client filters, to CAS server, and then back to Spring Security. For example, if a user browsed to https://localhost/myapp/widget.jsp, they would be redirected to https://localhost/cas. Upon successful login, the user would be incorrectly redirected back to https://localhost/myapp instead of the original target URL.<br />
<br />
<b>SavedRequestAwareAuthenticationSuccessHandler almost worked</b> <br />
I thought I had found a solution by using Spring Security's <a href="http://roguecoder.wordpress.com/2010/05/14/jasig-cas-spring-security-and-bookmarking/">SavedRequestAwareAuthenticationSuccessHandler</a>, and that did work for single requests, but it did not work in an environment like OWF where multiple widgets are loaded simultaneously. The reason is that SavedRequestAwareAuthenticationSuccessHandler extracts the original target URL from the session, which is originally set by the ExceptionTranslationFilter. Since it uses the session, there can only be one original target URL. So if I had a workspace in OWF that loaded two different widgets from the same WAR (/myapp/widget1.jsp and /myapp/widget2.jsp), then there would only be one target URL saved in the session and both widgets would load the last saved URL, which is obviously not good.<br />
<br />
There were several ideas out there, but I really didn't like any of them. Some required you to modify the CAS login form or return some weird javascript in one of the responses. What I wanted was the ability to preserve the original target URL within all the redirect URLs via a parameter. The only problem was, to my knowledge, nothing like this existed in Spring Security. So the following will show you how I extended Spring Security to preserve the original target URL via a parameter. As a simple example, here is a sequence of URLs that I was attempting to support (note: the URLs in the params are typically encoded, but I kept them decoded for readability):<br />
<ol>
<li>User browses to: https://localhost/myapp/widget.jsp</li>
<li>CAS Client filters redirect to: https://localhost/cas/login?service=https://localhost/myapp/j_spring_cas_security_check?spring-security-redirect=/widget.jsp</li>
<li>After authentication the user is redirected to the URL defined in the service parameter, https://localhost/myapp/j_spring_cas_security_check?spring-security-redirect=/widget.jsp, which is monitored by Spring Security.</li>
<li>Once Spring Security does its thing, it needs to redirect to the value in the spring-security-redirect parameter.</li>
</ol>
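Stripped of the Spring machinery, the heart of the approach is just carrying the original target in a query parameter appended to the service URL. An illustrative sketch of that step (the class and method names here are mine, not the actual RememberCasAuthenticationEntryPoint code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class ServiceUrlBuilder {
    // Appends the target-URL parameter to the CAS service URL,
    // encoding the value the way it would appear in a real redirect.
    public static String withTarget(String serviceUrl, String paramName, String targetUrl) {
        try {
            String sep = serviceUrl.contains("?") ? "&" : "?";
            return serviceUrl + sep + paramName + "=" + URLEncoder.encode(targetUrl, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(withTarget(
                "https://localhost/myapp/j_spring_cas_security_check",
                "spring-security-redirect",
                "/widget.jsp"));
    }
}
```

Because the target rides along in each widget's own redirect chain instead of in the shared session, two widgets from the same WAR can land on different pages after login.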
The following example works with Spring Security 3.1.4.RELEASE and Tomcat 7.0.21. <br />
<b>Spring Security Application Context File</b><br />
What Spring Security example wouldn't be complete without some XML? The following is the application context file (note, for simplicity I have hardcoded the CAS urls and other values, but these would typically be read in from a properties file):<br />
<br />
<pre class="brush: xml"><sec:http use-expressions="true" entry-point-ref="casEntryPoint">
    <sec:intercept-url pattern="/css/**" access="permitAll" />
    <sec:intercept-url pattern="/images/**" access="permitAll" />
    <sec:intercept-url pattern="/scripts/**" access="permitAll" />
    <sec:intercept-url pattern="/**" access="hasRole('ROLE_USER')" requires-channel="https"/>
    <sec:custom-filter ref="casFilter" after="CAS_FILTER"/>
</sec:http>

<sec:authentication-manager alias="authenticationManager">
    <sec:authentication-provider ref="casAuthProvider" />
</sec:authentication-manager>

<bean id="casEntryPoint" class="com.example.security.cas.web.RememberCasAuthenticationEntryPoint">
    <property name="loginUrl" value="https://localhost/cas/login" />
    <property name="serviceProperties" ref="serviceProperties" />
    <property name="targetUrlParameter" value="spring-security-redirect" />
</bean>

<bean id="casAuthProvider" class="com.example.security.cas.authentication.RememberCasAuthenticationProvider">
    <property name="userDetailsService" ref="userService" />
    <property name="serviceProperties" ref="serviceProperties" />
    <property name="ticketValidator" ref="ticketValidator" />
    <property name="key" value="an_id_for_this_auth_provider_only" />
    <property name="targetUrlParameter" value="spring-security-redirect" />
</bean>

<!--
  - This is the filter that monitors all incoming requests for the url /myapp/j_spring_cas_security_check (step 3 in the sequence above).
  - Sets the targetUrlParameter to redirect to the target URL after authentication.
  - Also sets the authenticationDetailsSource to our custom one in order to have access to the HttpServletRequest
  - in RememberCasAuthenticationProvider for ticket validation.
  - Tried setting the authenticationSuccessHandler to SavedRequestAwareAuthenticationSuccessHandler, and that works
  - for a single request, but in OWF if you have two widgets loading different URLs, it doesn't work because
  - it loads the saved url from the session object, so both widgets load the same url.
-->
<bean id="casFilter" class="org.springframework.security.cas.web.CasAuthenticationFilter">
    <property name="authenticationManager" ref="authenticationManager" />
    <property name="authenticationDetailsSource">
        <bean class="com.example.security.web.authentication.RememberWebAuthenticationDetailsSource"/>
    </property>
    <property name="authenticationFailureHandler">
        <bean class="org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler">
            <property name="defaultFailureUrl" value="/cas_failed.jsp" />
        </bean>
    </property>
    <property name="authenticationSuccessHandler">
        <bean class="org.springframework.security.web.authentication.SimpleUrlAuthenticationSuccessHandler">
            <property name="defaultTargetUrl" value="/" />
            <property name="targetUrlParameter" value="spring-security-redirect" />
        </bean>
    </property>
    <property name="proxyGrantingTicketStorage" ref="proxyGrantingTicketStorage" />
    <property name="proxyReceptorUrl" value="/secure/receptor" />
</bean></pre>
<br />
<b>CAS Entry Point</b><br />
When dealing with Spring Security it all starts with the http entry-point-ref attribute. Here is the code for RememberCasAuthenticationEntryPoint:<br />
<br />
<pre class="brush: java">package com.example.security.cas.web;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.jasig.cas.client.util.CommonUtils;
import org.springframework.security.cas.web.CasAuthenticationEntryPoint;
/**
* Class which is responsible for remembering the original target url specified by the client.
* Takes the original target url and appends that to the service param used by CAS.
* This will later be used to redirect to the target URL after authentication.
*/
public class RememberCasAuthenticationEntryPoint extends CasAuthenticationEntryPoint {
String targetUrlParameter = "spring-security-redirect";
protected String createServiceUrl(final HttpServletRequest request, final HttpServletResponse response) {
String service = this.serviceProperties.getService();
String servletPath = request.getServletPath();
if (servletPath != null && !servletPath.isEmpty()) {
service += String.format("?%s=%s", this.targetUrlParameter, servletPath);
}
return CommonUtils.constructServiceUrl(null, response, service, null, this.serviceProperties.getArtifactParameter(), this.encodeServiceUrlWithSessionId);
}
}</pre>
<br />
<b>CAS Authentication Provider</b><br />
Next up is the CAS authentication provider, which we have defined as RememberCasAuthenticationProvider. This bean is registered with the authentication manager, which the CAS filter references through the authenticationManager alias. Here is the code for RememberCasAuthenticationProvider:<br />
<br />
<pre class="brush: java">package com.example.security.cas.authentication;
import com.example.security.web.authentication.RememberWebAuthenticationDetails;
import org.jasig.cas.client.validation.Assertion;
import org.jasig.cas.client.validation.TicketValidationException;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.authentication.AccountStatusUserDetailsChecker;
import org.springframework.security.cas.authentication.CasAuthenticationProvider;
import org.springframework.security.cas.authentication.CasAuthenticationToken;
import org.springframework.security.cas.ServiceProperties;
import org.springframework.security.cas.web.CasAuthenticationFilter;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.authority.mapping.GrantedAuthoritiesMapper;
import org.springframework.security.core.authority.mapping.NullAuthoritiesMapper;
import org.springframework.security.core.userdetails.*;
/**
* CasAuthenticationProvider that tries to remember the original target url requested by the client.
* The trick is having access to the HttpServletRequest in the authenticateNow() method.
* This is accomplished via the RememberWebAuthenticationDetails class.
* Since authenticateNow() was marked as private in CasAuthenticationProvider I had to also override
* the authenticate() method. Created spring security jira https://jira.springsource.org/browse/SEC-2188
* to address making authenticateNow protected so we don't have to duplicate authenticate().
*/
public class RememberCasAuthenticationProvider extends CasAuthenticationProvider {
UserDetailsChecker userDetailsChecker = new AccountStatusUserDetailsChecker();
ServiceProperties serviceProperties;
GrantedAuthoritiesMapper authoritiesMapper = new NullAuthoritiesMapper();
String targetUrlParameter = "spring-security-redirect";
/**
* Straight copy and paste from CasAuthenticationProvider
* @see spring security jira https://jira.springsource.org/browse/SEC-2188
*/
public Authentication authenticate(Authentication authentication) throws AuthenticationException {
...
}
/**
* The service URL used in ticketValidator.validate() needs to match the service URL given to CAS when
* the ticket was granted.
*/
protected CasAuthenticationToken authenticateNow(final Authentication authentication) throws AuthenticationException {
try {
String targetPath = this.getTargetPath(authentication.getDetails());
final String service = String.format("%s?%s", serviceProperties.getService(), targetPath);
final Assertion assertion = this.ticketValidator.validate(authentication.getCredentials().toString(), service);
final UserDetails userDetails = loadUserByAssertion(assertion);
userDetailsChecker.check(userDetails);
return new CasAuthenticationToken(this.key, userDetails, authentication.getCredentials(),
authoritiesMapper.mapAuthorities(userDetails.getAuthorities()), userDetails, assertion);
} catch (final TicketValidationException e) {
throw new BadCredentialsException(e.getMessage(), e);
}
}
/**
* Extracts the original target url from the query string.
* Example query string: spring-security-redirect=/widget.jsp&ticket=ST-112-RiRTVZmzghHO7az5gpJF-cas
*/
protected String getTargetPath(Object authenticationDetails) {
String targetPath = "";
if (authenticationDetails instanceof RememberWebAuthenticationDetails) {
RememberWebAuthenticationDetails details = (RememberWebAuthenticationDetails) authenticationDetails;
String queryString = details.getQueryString();
if (queryString != null) {
int start = queryString.indexOf(this.targetUrlParameter);
if (start >= 0) {
int end = queryString.indexOf("&", start);
if (end >= 0) {
targetPath = queryString.substring(start, end);
} else {
targetPath = queryString.substring(start);
}
}
}
}
return targetPath;
}
}
</pre>
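To see what getTargetPath() produces, here is a small standalone sketch of the same substring logic run against the example query string from the javadoc above. The class and method names here are mine, for illustration only:<br />
<br />
<pre class="brush: java">public class TargetPathDemo {
    // Mirrors getTargetPath(): locate the parameter and cut at the next '&', if any.
    // Note the extracted value keeps the parameter name, which matches how
    // authenticateNow() appends it to the service URL with "%s?%s".
    static String extract(String queryString, String param) {
        int start = queryString.indexOf(param);
        if (start < 0) {
            return "";
        }
        int end = queryString.indexOf("&", start);
        return end >= 0 ? queryString.substring(start, end) : queryString.substring(start);
    }
    public static void main(String[] args) {
        String qs = "spring-security-redirect=/widget.jsp&ticket=ST-112-RiRTVZmzghHO7az5gpJF-cas";
        // Prints: spring-security-redirect=/widget.jsp
        System.out.println(extract(qs, "spring-security-redirect"));
    }
}</pre><br />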
<br />
<b>Authentication Details Source</b><br />
Since I needed access to the request's query string in the RememberCasAuthenticationProvider.getTargetPath() method, I needed to provide a different WebAuthenticationDetails class. This was accomplished by setting the authenticationDetailsSource property on the CAS Filter. Here is the code for RememberWebAuthenticationDetailsSource:<br />
<br />
<pre class="brush: java">package com.example.security.web.authentication;
import javax.servlet.http.HttpServletRequest;
import org.springframework.security.authentication.AuthenticationDetailsSource;
import org.springframework.security.web.authentication.WebAuthenticationDetails;
public class RememberWebAuthenticationDetailsSource implements AuthenticationDetailsSource<HttpServletRequest, WebAuthenticationDetails> {
public WebAuthenticationDetails buildDetails(HttpServletRequest request) {
return new RememberWebAuthenticationDetails(request);
}
}
</pre>
<br />
Here is the code for RememberWebAuthenticationDetails:<br />
<br />
<pre class="brush: java">package com.example.security.web.authentication;
import javax.servlet.http.HttpServletRequest;
import org.springframework.security.web.authentication.WebAuthenticationDetails;
public class RememberWebAuthenticationDetails extends WebAuthenticationDetails {
private final String queryString;
public RememberWebAuthenticationDetails(HttpServletRequest request) {
super(request);
this.queryString = request.getQueryString();
}
public String getQueryString() {
return this.queryString;
}
}
</pre>
<br />
<b>Summary</b><br />
That pretty much does it. I know it's a lot of code, but once I got familiar with Spring Security it really wasn't that much. Once you get this all configured it should just work: the user's original target URL is visible in the redirect to the CAS login page, and after authentication the user is redirected back to that original target URL. And in an OWF environment where multiple URLs from the same WAR are being loaded simultaneously, this solution seems to work.<br />
<br />
I do want to mention that we have since noticed conditions where the entire URL is not remembered. Simple URLs like /myapp/widget.jsp work great, but REST URLs like /myapp/api/events/1 are not completely preserved, nor are query parameters, as in /myapp/widget.jsp?id=1. At this time we really don't need that capability, but I don't think it would be hard to add; most of the work has already been done.<br />
<br />
One other thing. I think this experiment raises the question: why doesn't Spring Security already support something like this out of the box? It would seem like a very common use case. While researching, it seems there might be a reluctance to support this feature due to security concerns about a malicious user gaining access to a resource they are not authorized for. I didn't do super extensive testing, but in the testing I did do, I was not able to gain access to resources I was not authorized for. Perhaps when/if I come back to these classes and add support for the noted issues above I might submit a patch back to Spring Security so this can get incorporated into Spring Security.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-62561234032340048562013-06-28T01:34:00.001-05:002013-06-28T09:05:07.423-05:00Setting Gradle home directory and proxy in JenkinsReal quick. Spent the past few hours working around some nasty issues with <a href="http://www.gradle.org/">gradle</a> and <a href="http://jenkins-ci.org/">jenkins</a>. It seems, due to a bug, the jenkins gradle plugin puts the dependency/artifact cache under the job's workspace. This really isn't a good idea, as every job would then download all of the project's artifacts, taking up large amounts of space. At the same time, I also needed to set up the proxy information for gradle, which sadly doesn't reuse the jenkins proxy information.<br />
<br />
I finally figured out a single place to define the gradle user home and proxy information, preventing each job from having to define it.<br />
<br />
Go into <b>Manage Jenkins</b> > <b>Configure System</b>. Under Global properties check Environment variables and fill in the following for name and value:<br />
<br />
<b>name</b>: GRADLE_OPTS<br />
<b>value</b>: -Dgradle.user.home=/home/tomcat/.gradle -Dhttp.proxyHost=101.10.10.10 -Dhttp.proxyPort=3128<br />
<br />
For the gradle.user.home property, I tried using ~/.gradle, but that didn't work which means most likely my $HOME environment variable was not set for whatever reason. My guess is it has something to do with all the <a href="http://answers.bitnami.com/questions/12799/setting-up-jenkins-on-ec2-with-github-private-repository">troubles</a> I've had lately using the <a href="http://bitnami.com/stack/jenkins/cloud/amazon">bitnami jenkins amazon ami</a>. I also tried setting the environment variable GRADLE_USER_HOME, but that didn't seem to work. Either way, hopefully this will help others.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-20247324991076421362013-06-25T15:34:00.000-05:002013-06-25T15:39:14.230-05:00Resource Filtering with GradleMy team has recently started a new Java web application project and we picked <a href="http://www.gradle.org/">gradle</a> as our build tool. Most of us were extremely familiar with <a href="http://maven.apache.org/">maven</a>, but decided to give gradle a try.<br />
Today I had to figure out how to do resource filtering in gradle. And to be honest it wasn't as <a href="https://twitter.com/jlorenzen/status/349539282718965760%20">easy</a> as I thought it should be, at least coming from a maven background. I eventually figured it out, but wanted to post my solution to make it easier for others.<br />
<br />
<b>What is Resource Filtering?</b><br />
First, for those that may not know, what is resource filtering? It's basically a way to avoid hardcoding values in files and make them more dynamic. For example, I may want to display my application's version inside the application. The version is usually defined in your build file, and this value can be injected or replaced in your configuration file during assembly. So I could have a file called config.properties under src/main/resources with the following content:<br />
application.version=${application.version}. With resource filtering the ${application.version} value gets replaced with 1.0.0 during assembly, then my application can load config.properties and display the application version.<br />
<br />
It's an extremely valuable and powerful feature in build tools like maven and one that I took advantage of often.<br />
<br />
<b>Resource Filtering in Gradle</b><br />
With this being my first gradle project, I needed to find the recommended way to enable resource filtering in gradle. My first problem I had to figure out was where to define the property. In maven this would typically be defined in the project's pom.xml file as a maven property:<br />
<br />
<pre class="brush: xml"><properties>
<application.version>1.0.0</application.version>
</properties>
</pre>
<br />
For gradle the appropriate place seemed to be the project's gradle.properties file. So you would add the following to your project's gradle.properties file (Note, I'm not suggesting you would hardcode the module's version in the gradle.properties file. Obviously the value would be derived from the version property in your project. I'm just using this for a simple example):<br />
<br />
application.version=1.0.0<br />
<br />
The next, and most difficult, problem I had to track down was how to actually enable resource filtering. I was hoping to just set some enableFiltering option and define the includes/excludes list, but that doesn't seem to be the case (extra tip: don't do filtering on binary files like images). I did find some resources online, but this <a href="http://gradle.1045684.n5.nabble.com/Filtering-Files-in-a-Build-td3371947.html">one</a> seemed to be the best approach. So you will need to add the following to your build.gradle file:<br />
<br />
<pre class="brush: groovy">import org.apache.tools.ant.filters.*
processResources {
filter ReplaceTokens, tokens: [
"application.version": project.property("application.version")
]
}
</pre>
<br />
Next you need to update your resource file. So put a config.properties file under src/main/resources and add this:<br />
<br />
application.version=@application.version@<br />
<br />
Note the use of @ instead of ${}. This is because the ReplaceTokens filter comes from <a href="http://ant.apache.org/">ant</a>, and ant by default uses the @ character as the token delimiter whereas maven uses ${}.<br />
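As an aside, if you would rather keep the maven-style ${} delimiters in your resource files, the ant ReplaceTokens filter accepts custom delimiters. I haven't verified this against every gradle version, so treat the following as a sketch:<br />
<br />
<pre class="brush: groovy">import org.apache.tools.ant.filters.*
processResources {
    // beginToken/endToken override ant's default @ delimiters
    filter ReplaceTokens, beginToken: '${', endToken: '}', tokens: [
        "application.version": project.property("application.version")
    ]
}
</pre><br />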
<br />
Finally, if you build your project you can look under build/resources/main and you should see a config.properties file with a value of 1.0.0. You can also open up your artifact and see the same result.<br />
<br />
<b>Dot notation</b><br />
One thing to note is I typically use a period or dot to separate words for properties: application.version instead of applicationVersion. So you will notice the surrounding quotes around "application.version" in the build.gradle file. This is required: failing to surround the key with quotes will fail the build, probably because groovy interprets the dot as property access on an object.<br />
<br />
<b>Overriding</b><br />
I also investigated the best approach to overriding properties in gradle, as this appeared to be slightly different than how it's done in maven. In maven, properties can be overridden by properties defined in the user's settings.xml file or on the command line with the -D option. To override application.version in gradle on the command line I had to run the following:<br />
<br />
gradle assemble -Papplication.version=2.0.0<br />
<br />
If you want to override it for all projects you can add the property in your gradle.properties file under /user_home/.gradle.<br />
<br />
Also, if you are overriding the value via the command line and your property value contains special characters like a single quote, you can wrap the value with double quotes like the following to get it to work:<br />
<br />
gradle assemble -Papplication.version="2.0.0'6589"<br />
<br />
<b>Summary</b><br />
Well I hope this helps and if anyone from the gradle community sees a better way to perform resource filtering I'd love to hear about it. I'd also like to see something as important as resource filtering becoming easier to perform in gradle. I think it's crazy having to add an import statement to perform something so simple. jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-59734515246334526482012-12-13T23:10:00.000-06:002012-12-13T23:10:06.591-06:00Configuring CAS Externally using Spring ImportOver the past few months, my team has been working on integrating the <a href="http://www.jasig.org/cas">Jasig CAS</a> (Central Authentication Service) framework with our application. CAS is a feature-rich enterprise Single Sign On (SSO) service. Most importantly, it's very flexible. We need this flexibility because we have several customers, each with different authentication requirements: some with and without Active Directory, and others needing PKI (X.509) authentication. Beyond that, we also need to support lightweight options for local development. So the problem we had to solve was how to configure CAS externally so we could dynamically change authentication mechanisms without having to rebuild CAS.<br />
<br />
Let me first explain that the recommended way to extend CAS is using <a href="https://wiki.jasig.org/display/CASUM/Best+Practice+-+Setting+Up+CAS+Locally+using+the+Maven2+WAR+Overlay+Method">maven overlays</a>. It's really simple to modify the default behavior of CAS by copying files from the CAS WAR into your own WAR directory. For example, most of the authentication configuration exists in /WEB-INF/deployerConfigContext.xml. To change the default authentication behavior, you copy this file to your WAR's /WEB-INF directory and modify it to fit your requirements. This is the file that defines all the authentication handlers for things like LDAP (BindLdapAuthenticationHandler), X.509 (X509CredentialsAuthenticationHandler), and my personal favorite, the simple username-equals-password (SimpleTestUsernamePasswordAuthenticationHandler) authentication handler.<br />
<br />
So the real problem became how can we enable the SimpleTestUsernamePassword authentication handler for local development and test servers, while disabling it for production systems? We wanted to avoid creating multiple CAS WARs containing configuration for their specific purpose: cas-ldap, cas-x509, cas-simple. Also, we wanted to avoid having a single CAS WAR that contained all methods of authentication. Ironically the solution was really simple and is actually used in other places in CAS.<br />
<br />
<b>Spring Import</b><br />
The solution was <a href="http://static.springsource.org/spring/docs/2.0.x/reference/beans.html">spring imports</a> (section 3.2.2.1). This was a feature I was not previously familiar with, but now think is one of the coolest features in spring because it lets you change the behavior of the system without having to rebuild. Spring lets you load in other bean xml files via the spring import tag. And since CAS uses spring, it was easy to modify CAS to be configured externally.<br />
<br />
<b>Configuring CAS Externally</b><br />
First, I assume you have already set up your maven project that is performing the maven overlay. Once you have built your project do the following:<br />
<ul>
<li>copy <br />/cas/target/war/work/org.jasig.cas/cas-server-webapp/WEB-INF/deployerConfigContext.xml<br />to<br />/cas/src/main/webapp/WEB-INF</li>
<li>You'll also need to copy this same file to your application server's classpath for your application. For us this would be under the jboss conf directory: /jboss/server/default/conf. I also renamed this file to custom-deployerConfigContext.xml to avoid confusion.</li>
<li>Open the file /cas/src/main/webapp/WEB-INF/deployerConfigContext.xml</li>
<li>Remove all the bean definitions inside the beans tag, leaving an empty beans element.</li>
<li>Paste in the following spring import tag:<br /><import resource="classpath:custom-deployerConfigContext.xml"/></li>
<li>Save your changes, rebuild and redeploy your CAS WAR</li>
</ul>
That's it. Now you can edit the custom-deployerConfigContext.xml and add/remove the SimpleTestUsernamePasswordAuthenticationHandler, or any other authentication handlers, without having to rebuild your CAS WAR.<br />
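After these steps, the deployerConfigContext.xml packaged inside the WAR is reduced to little more than the import. Roughly, it looks like the following (namespace declarations abbreviated; your CAS version's schema locations may differ):<br />
<br />
<pre class="brush: xml"><beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- All authentication handlers now live in custom-deployerConfigContext.xml on the classpath -->
    <import resource="classpath:custom-deployerConfigContext.xml"/>
</beans>
</pre><br />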
<br />
<b>Final Thoughts</b><br />
Hopefully, you learned a little about how to configure CAS externally and a way to use spring's import ability. For the most part this works great, but it isn't 100% foolproof. There are several CAS configuration files, so for the more complex authentication scenarios, you might have to modify the cas-servlet.xml or login-webflow.xml files. For example, if you want to do <a href="https://wiki.jasig.org/display/CASUM/X.509+Certificates">X.509 authentication</a>, you have to include a bean in the cas-servlet.xml file and modify the login-webflow.xml file. But you could apply the same concepts to these files as well, since cas-servlet.xml is just another spring file and login-webflow.xml is a spring webflow file that supports the bean-import tag.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-5105355433884720702011-04-05T23:11:00.006-05:002011-04-05T23:22:50.824-05:00What's missing from our REST Services?While reading the excellent book <a href="http://www.amazon.com/Restful-Web-Services-Leonard-Richardson/dp/0596529260">RESTful Web Services</a> I discovered something. A technique used on the web since its beginning. <a href="http://apiwiki.twitter.com/w/page/22554679/Twitter-API-Documentation">Twitter</a> and <a href="https://developers.facebook.com/docs/reference/api/">Facebook</a> seem to use it. Yet it was absent from the REST services I had been developing, and if I had to guess, most REST developers aren't including it either. What is so valuable the Internet would be useless without it? The answer is <a href="http://en.wikipedia.org/wiki/Hypermedia">hypermedia</a>.<br />
<br />
Hypermedia is the technical term used to describe how web pages (resources) are linked or connected. The author of RESTful Web Services calls it <i>connectedness</i>: how the server guides the client from one application state to another by providing links. A good example of a well-connected application is <a href="http://www.wikipedia.org/">Wikipedia</a>. It's very powerful that you can pick a random page and click through to every entry on Wikipedia without ever editing your browser's URL. Performing a search on Google is another great example, as it wouldn't be very useful if the search results didn't include any links. Without these links, the client must know and create predefined rules to build every URL it wants to visit.<br />
<br />
So what do links on a website have to do with REST Services? REST was built on the foundation of the web, and just because you're not returning HTML doesn't mean you shouldn't create relationships between your resources. In fact, it's amazing how powerful embedding links in your responses can be. For instance, it prevents tight coupling between your clients and services, as the clients don't have to construct or even know the exact URLs because the services are providing them.<br />
<br />
Let's use Twitter as an example. Assume the following is the URL to get the 20 most recent tweets for my account:<br />
<br />
http://api.twitter.com/1/statuses/user_timeline.json?screen_name=jlorenzen<br />
<br />
And here is a shortened <span class="hw">fictitious response:</span><br />
<br />
<pre class="brush: js">[
  {
    "text": "Finished watching Battlestar Galactica",
    "id": "55947977415667712",
    "url": "http://api.twitter.com/1/statuses/show/55947977415667712.json"
  },
  {
    "text": "Started watching Battlestar Galactica",
    "id": "45947977415667712",
    "url": "http://api.twitter.com/1/statuses/show/45947977415667712.json"
  }
]
</pre><br />
<span class="hw">Notice each tweet includes a direct URL. Now clients can use that URL value versus constructing it, and if it changes in the future, clients don't have to make any changes.</span><br />
<br />
<span class="hw"><b>Example using Jersey</b></span><br />
<span class="hw">So what is the best way to include links in your REST Services? Since I use <a href="http://jersey.java.net/">Jersey</a> on a daily basis, I'll go ahead and show an example of how to embed links in your responses using Jersey. Most likely, any REST framework is going to provide the same kind of features.</span><br />
<br />
<span class="hw">Using the twitter example above, the User Status Service with links may look something like this (using groovy):</span><br />
<br />
<pre class="brush: groovy">import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces
import javax.ws.rs.QueryParam
import javax.ws.rs.core.Context
import javax.ws.rs.core.UriInfo
import com.test.UserStatuses
@Path("/statuses")
class StatusesResource {
@Context
UriInfo uriInfo
@GET
@Path("/user_timeline")
@Produces(["application/json", "application/xml"])
def UserStatuses getUserTimeline(@QueryParam("screen_name")String screen_name) {
def statuses = getStatuses(screen_name)
statuses.each {
it.url = "${uriInfo.baseUri}statuses/show/$it.id"
}
return new UserStatuses(statuses: statuses)
}
}
</pre><br />
In this simple example, Jersey injects the UriInfo object which we use to get the baseUri of the request. It's really that simple.<br />
<br />
<span class="hw"> </span><br />
<b><span class="hw">Potential Issues</span></b><br />
<span class="hw">Well for some it may not be that simple. For example, we discovered an issue when using a reverse proxy (apache httpd). In our production environments, we typically set up apache on port 80 to proxy a localhost JBoss on port 8080. Unfortunately, in this setup UriInfo.getBaseUri() returns localhost:8080 and not the actual original URL the client used, which is obviously not good. Now if you don't use a reverse proxy then no worries. However, if you do, or might potentially in the future, an easy solution seems to be to set the Apache Proxy module option <a href="http://httpd.apache.org/docs/2.0/mod/mod_proxy.html#proxypreservehost">ProxyPreserveHost</a> to On. Setting this to On and restarting Apache seems to fix the issue.</span><br />
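For reference, the relevant piece of the Apache configuration would look something like this (the /myapp context path and port 8080 are assumptions based on the setup described above):<br />
<br />
<pre class="brush: plain"># Forward the original Host header so UriInfo.getBaseUri() sees the client-facing URL
ProxyPreserveHost On
ProxyPass /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
</pre><br />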
<br />
<b><span class="hw">JSONView Firefox Plugin</span></b><br />
<span class="hw">Once you've started embedding URLs in your REST responses, you might find it useful to install the <a href="https://addons.mozilla.org/en-US/firefox/addon/jsonview/">JSONView</a> firefox plugin. It's got some really slick features like formatting the JSON and creating clickable links for URLs.</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHGAvVPGs9QR9B3a_fpBZXhQdOnoI6vx2dpMipcg_qvsyLXK9Z3YXBEXQMgXxcF-XDANtp6QBa9AOnvpzRDsRGnwf1gY4aVvLwA_XuJXhng33quJKIAnD3svGBBuI0PkVmmL91DYptoQQV/s1600/TwitterService-JSONView2.png" imageanchor="1"><img border="0" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHGAvVPGs9QR9B3a_fpBZXhQdOnoI6vx2dpMipcg_qvsyLXK9Z3YXBEXQMgXxcF-XDANtp6QBa9AOnvpzRDsRGnwf1gY4aVvLwA_XuJXhng33quJKIAnD3svGBBuI0PkVmmL91DYptoQQV/s400/TwitterService-JSONView2.png" width="400" /></a></div>jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-88497598305601317522010-10-01T13:07:00.000-05:002010-10-01T13:07:09.328-05:00How to Change Extjs PieChart ColorsWhen using Sencha's, or extjs, charting capability, most likely you're going to want to change the default color scheme. I was faced with this issue today and it's not documented as well as you'd expect. I had to piece together a few articles and I'm still not 100% sure it's the right way, as it's using an undocumented config option. But I wanted to document how I did get it to work to save others some time.<br />
<br />
<a href="http://www.sencha.com/">Sencha</a> is very well documented and their <a href="http://dev.sencha.com/deploy/dev/examples/">examples</a> are great. Here is the <a href="http://dev.sencha.com/deploy/dev/examples/chart/pie-chart.html">Pie Chart example</a> we are going to be updating (using version 3.2.1). For those that don't know, extjs charts are based on Yahoo's <a href="http://developer.yahoo.com/yui/charts/">YUI charts</a>, which also require Flash. Not only is it pretty simple to create charts with extjs, but we already are using extjs so we decided to prototype some charts using it.<br />
<br />
Here is the code that produces a simple basic PieChart:<br />
<pre class="brush: js">new Ext.Panel({
width: 400,
height: 400,
title: 'Pie Chart with Legend - Favorite Season',
renderTo: 'container',
items: {
store: store,
xtype: 'piechart',
dataField: 'total',
categoryField: 'season',
extraStyle:
{
legend:
{
display: 'bottom',
padding: 5,
font:
{
family: 'Tahoma',
size: 13
}
}
}
}
}); </pre><br />
This produces the following <a href="http://dev.sencha.com/deploy/dev/examples/chart/pie-chart.html">PieChart</a>.<br />
<br />
To change the default colors provide a series config option:<br />
<pre class="brush: js; highlight: [11, 12, 13, 14, 15]">new Ext.Panel({
width: 400,
height: 400,
title: 'Pie Chart with Legend - Favorite Season',
renderTo: 'container',
items: {
store: store,
xtype: 'piechart',
dataField: 'total',
categoryField: 'season',
series: [{
style: {
colors: ["#ff2400", "#94660e", "#00b8bf", "#edff9f"]
}
}],
extraStyle:
{
legend:
{
display: 'bottom',
padding: 5,
font:
{
family: 'Tahoma',
size: 13
}
}
}
}
});</pre><br />
The chart it produces may not look the most appealing but at least we figured out how to change the colors.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img border="0" height="187" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqI4XUkSeoRx9oeXoCcNHiACDmLFQ6e6wOZ6V01trlNjQg2m1JXdzEbFIRT0gJZzDdpoa2qikALOHvRJwEc7Lcbsb17Mn_xl1Phnl7WOafuGjYlLAmP-eTl86FNtW0Ulw6gdN0rvcbsjA2/s200/piechart-newcolors.png" style="margin-left: auto; margin-right: auto;" width="200" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Pie Chart with New Colors</td></tr>
</tbody></table><br />
<br />
Again, I'm not certain this is the best way to accomplish this as the series config option isn't <a href="http://dev.sencha.com/deploy/dev/docs/output/Ext.chart.PieChart.html#Ext.chart.PieChart-configs">documented in 3.2.1</a>. But by piecing together this <a href="http://www.sencha.com/forum/showthread.php?91476-Pie-Chart&highlight=chart+change+colors">PieChart question</a> and this <a href="http://miamicoder.com/post/2009/10/Custom-Markers-for-Your-Ext-JS-Charts.aspx">article</a>, I was able to figure it out.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-67800994838758069802010-08-19T21:49:00.000-05:002010-08-19T21:49:20.212-05:00Missing Office Equipment PrankHere is Part 2 of my <a href="http://jlorenzen.blogspot.com/2010/08/real-man-of-genius.html">series</a> where I reminisce about the good times I had at Williams. As I stated previously, I came across these printed off emails in our attic while moving to our new home. I'm so glad I printed them off. #goodtimes<br />
<br />
Anyway, this was a prank Keith Stanek, Josh Guthman (Guthy), and I pulled on Michael Brotherman (Are you Miking Me?). Before showing you the email, though, I need to set the stage. The year was 2003, and our company had just gone through 3 rounds of layoffs. Morale was pretty low. We worked on the 32nd floor of the 52-story BOK building in downtown Tulsa, Oklahoma. During fire drills we had to take the stairs down a floor. During one of the fire drills we noticed the entire 31st floor was devoid of humans, but a lot of very nice unused office furniture and white boards were still present. We joked around about repurposing some of the nicer equipment, but nothing ever came of it. Then one morning, we noticed Mike had a new chair. A very nice new chair. We gave him junk about it all day, but decided a prank would be better. So we had Josh Guthman (Guthy) write us up a believable email that Keith and I would spoof using the Facilities email address. Here it is:<br />
<br />
<b>From</b>: Facilities Services-Tulsa<br />
<b>Sent</b>: Thursday, March 13, 2003 7:39AM<br />
<b>Subject</b>: Missing Office Equipment<br />
<br />
During a recent audit for unused office equipment we discovered black reclining office chairs were missing from several of the floors in the BOK tower. Upon further investigation we discovered those chairs had been procured by current employees looking to upgrade from their existing chairs. While facilities appreciates your desire to utilize existing assets instead of requisitioning new ones, please remember that Williams is undergoing a cost savings campaign at this time and that all unused equipment needs to remain on its respective floor for proper accounting and redistribution either to the lessee or to an employee in need. If we have not already reacquired your chair, please return it to the floor from which it came by the end of business Friday, March 14.<br />
<br />
Mike smelled it out, but no one ever confessed. I'm pretty sure that deep down, even though his gut was telling him it was a prank, he still didn't want to take the chance that it might be true. The best part about it was that the morning we sent the email, Mike and I had a group breakfast meeting with one of the Senior Executives. As we were going down the elevator, he mentioned to me that he was going to bring it up during the meeting, and with a completely straight face I called his bluff and told him that I thought he should, knowing that he wouldn't anyway; and if he did, it would only make the prank that much better.<br />
<br />
Mike returned the chair.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-518725405119703552010-08-18T21:28:00.000-05:002010-08-18T21:28:45.664-05:00Real man of GeniusI usually keep this blog professional, but I'm in the process of moving and came across some emails that I just have to share and they are too long to post on facebook. All 3 emails were from my time at The Williams Company in Tulsa, OK, where I had the privilege to work with some great friends. People like Keith Stanek, Michael Brotherman (Are you Miking me?), Jason Randall, Josh Guthman (Guthy), Erin Nylund, Becca Fairchild, Jennifer Brandt, and a host of others. I was so fortunate to work with these people and I'm not sure it could ever be duplicated. We had a ton of fun together. During off hours we played a lot of cards (Rook mainly), Starcraft I, and lots and lots of Halflife. Over time, I got to be pretty good at Halflife (HL). At some point we started playing 2 on 1 (Keith and Mike against me), and I would still usually win.<br />
<br />
Now to the email. On Wednesday, August 20, 2003 at 9:45 AM, Michael Brotherman emailed Keith and me his Real man of Genius in my honor. It was done just like the <a href="http://budlight.whipnet.com/">Real Men Of Genius commercials</a>. Here it is in its original form:<br />
<br />
Mr real man of genius...<br />
We solute you, Mr. Total Conqueror in HL.<br />
You who drops bombs that are not in the bathroom.<br />
Who shoots your double barrel in our face.<br />
<br />
Your eyes keep us from getting caught,<br />
But your play keeps us from coming back.<br />
<br />
Here's to you...Mr crossbow no miss,<br />
Mr. Ray gun who makes it no fun.<br />
<br />
We solute you.<br />
<br />
<br />
My favorite weapon was the crossbow. Oh, good times. Stay tuned: I've got 2 more coming.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-89378408185439862010-08-14T23:20:00.000-05:002010-08-14T23:20:28.667-05:00Hash tags in Commit CommentsI've been using Yammer, Twitter, and Facebook consistently for a while now. One of the things I really like is hash tags, where yams or tweets include additional meta information in the comment such as #groovy, #hudson, or #maven. One of the main purposes of hash tags is that they allow others to subscribe to an area of interest versus subscribing to hundreds of individual people. Another purpose they serve is determining interest value, sort of like a subject heading. Since hash tags are typically at the end of a tweet or yam, I usually read the end first before I commit to reading the whole yam or tweet. I don't follow a ton of people yet, but I do consume a lot of information in a day, and in order to find the good I have to wade through the bad. Using hash tags aids in this process.<br />
<br />
I also think hash tags could help in another area: commit comments. It's something important I've <a href="http://jlorenzen.blogspot.com/2010/02/commit-comments-conversation-with-your.html">mentioned</a> before, and I think hash tags can be useful in commit comments even if there aren't any tools yet to mash them up. A few days ago, out of habit I accidentally started including some high-level hash tags in an svn commit comment, and it occurred to me that they might be useful to others, if not myself in 6 months. If we find hash tags useful in yams and tweets, why not commit comments?<br />
<br />
Including tokens in commit comments isn't new. In fact, we already include a Jira number in most of our commit comments, and this allows us to view all the commits for a Jira issue. There is even a <a href="http://blogs.atlassian.com/developer/2009/10/dragon_slayer_supplement_action_issues_with_commit_commands.html">Jira plugin</a> that allows you to perform actions by specifying hashes in commit comments. For example, if I want to resolve a Jira I can include #resolve in my commit comment, and Jira will automatically resolve that issue. And don't feel like you can only include the #resolve tag if you're using that Jira plugin; I could see value in seeing a #resolve tag in the final commit of a Jira regardless.<br />
<br />
As an example, here is the exact commit comment I used that includes some hash tags for geoserver and installer.<br />
<br />
"<i>Jira: AC-4207. Got the filtered geowebcache.xml file correctly moved to the production and staging data directories. These files point to localhost with the correct geo port and stage geo port. Also commented out some fixpath.cmd lines to get the installer to work. Finally, I also change the ProcessPanel to not have a condition: &lt;panel classname="ProcessPanel" condition="env.install"&gt; changed to &lt;panel classname="ProcessPanel"&gt;. This should allow us to be really selective in what we install and still allow the process panel to run, whereas before it wasn't running. #geoserver #installer</i>"<br />
<br />
Now the really cool part is that if someone else notices an issue with geoserver in our installer in the near future, this comment will stick out more than a comment without those hashes.<br />
<br />
Another cool thing that could be done is a team subscribing to certain hash tags in the svn commit emails. For example, someone responsible for peer reviewing all DAO changes could subscribe to a hash like #dao. Then when developers are modifying DAO's all they need to do is include the #dao tag.<br />
<br />
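As an illustration of how tooling could act on these tags, here is a small hypothetical Java helper (the class name and commit text are made up) that extracts the hash tags from a commit comment, the kind of thing a commit hook could use to route emails to #dao or #geoserver subscribers:<br />

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CommitTags {
    private static final Pattern TAG = Pattern.compile("#\\w+");

    // Extracts the hash tags (e.g. #dao, #installer) from a commit comment.
    static List<String> tagsOf(String comment) {
        List<String> tags = new ArrayList<>();
        Matcher m = TAG.matcher(comment);
        while (m.find()) {
            tags.add(m.group());
        }
        return tags;
    }

    public static void main(String[] args) {
        // prints [#geoserver, #installer]
        System.out.println(tagsOf("Jira: AC-4207. Moved geowebcache.xml. #geoserver #installer"));
    }
}
```

A hook like this, pointed at the commit emails, could drive the tag subscriptions described above.<br />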
I guess what I am saying is that perhaps we could also benefit from putting extra hash tags in our commit comments. My brain has already been trained to read them, so personally I think it's useful.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-76174627544524102012010-07-25T21:09:00.001-05:002010-07-25T21:09:42.471-05:00Restoring a Clonezilla Image using VirtualBoxUbuntu <a href="http://www.ubuntu.com/desktop">10.04</a> has been out for a few months, and I'm still on 9.10. I have had some success in the past upgrading, but I still prefer doing fresh installs. I guess it comes from my windows days, when an occasional fresh install was good for the computer soul. However, this time I'm also starting a new project at work doing .net instead of java, and I really wanted the ability to "come back" to my old setup. Basically, I wanted to convert my host machine to a virtual one, or what's called P2V (Physical to Virtual). I tried <a href="http://www.vmware.com/products/converter/">VMware Converter</a> but didn't get very far. With some advice from several co-workers, though, I did come up with a method that worked, and it was fairly easy.<br />
<br />
The basic steps are:<br />
<ul><li>Use <a href="http://clonezilla.org/">Clonezilla</a> to <a href="http://clonezilla.org/clonezilla-live/doc/showcontent.php?topic=01_Save_disk_image">Save a Disk Image</a> to an external USB drive. This essentially clones my host machine so I can restore it later. My hard drive is around 120GB, so I put the image on my 500GB external USB drive. This took about 1.5 hours.</li>
<li>Create a new virtual machine on another external USB drive. The nice thing about using Clonezilla is that for this step you can use either VMware or <a href="http://www.virtualbox.org/">VirtualBox</a>; I used VirtualBox. Obviously, you can't create the virtual machine on your laptop because you don't have enough space. And you can't use the same USB drive, because Clonezilla needs it to be unmounted when restoring. So instead I used another external USB drive.</li>
<li>Start the new virtual machine and boot up Clonezilla to begin <a href="http://clonezilla.org/clonezilla-live/doc/showcontent.php?topic=02_Restore_disk_image">restoring the image</a>. You need to change the mode because the default view doesn't work very well when restoring. So at the Clonezilla menu, choose "Other modes of Clonezilla live". Then choose "Clonezilla live (Safe graphic settings, vga=normal)".</li>
<li>When you get to the point where Clonezilla needs to point to the external USB drive that contains the Clonezilla image, remember to enable the USB drive in VirtualBox. To do this, go to the Devices menu option in your virtual machine and select USB Devices and check the appropriate USB drive. Restoring my 120GB image took about 24 hours so make sure to do it when you have time.</li>
<li>Once Clonezilla has finished restoring the image, you're ready to power off the virtual machine, remove the Clonezilla CD, and restart.</li>
</ul>I still had a few adjustments to make in order to get it to work. When I first started my virtual machine, it complained about not having PAE (Physical Address Extension) enabled. I enabled PAE in ubuntu about a <a href="http://twitter.com/jlorenzen/status/17982257006">month ago</a> so I could use all 4GB of RAM. Fixing it was easy: under your machine's settings, go to System and click on the <u>P</u>rocessor tab. Check the "Enable PAE/NX" checkbox and restart.<br />
<br />
Once it booted up, it complained about my graphics configuration. I tried selecting "Reconfigure Graphics", but that didn't work. Instead I was able to get past it by selecting "Run in low graphics for one session". This allowed me to finish booting, and installing the virtualbox guest additions seemed to solve the graphics issue.<br />
<br />
That is all there is to it. It was all rather easy. Now I can install Ubuntu 10.04 and still have the ability to go back to my previous development environment. I can also see lots of different use cases for this. Combined with the ability to <a href="http://joshuahoover.com/2010/04/01/cloning-virtualbox-images/">clone virtual machines</a>, all your virtual needs are met.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-45659975547184195242010-07-25T00:09:00.001-05:002010-07-25T00:10:23.737-05:00Upgrading to Maven 3I've been playing around with <a href="http://maven.apache.org/">maven 3</a> lately on our legacy maven 2 multi-module project via <a href="http://jlorenzen.blogspot.com/2010/03/how-to-speed-up-maven.html">mvnsh</a>. As advertised, maven 3 is backwards compatible with maven 2. In fact, most everything worked out of the box when switching to maven 3. In this post, I'm going to highlight the required and currently optional items I changed so you can start preparing to migrate your project to maven 3. But first, what's so special about maven 3 and why would you upgrade? <a href="http://polyglot.sonatype.org/">Polyglot maven</a>, <a href="http://shell.sonatype.org/">mvnsh</a>, and <a href="http://raibledesigns.com/rd/entry/what_s_new_in_maven">improved performance</a> (50%-400% faster) are just a few reasons. And since it's so easy to migrate to maven 3, you really don't have any excuses.<br />
<br />
Currently, I build our project using maven 2.2.1. This article was tested with mvnsh 0.10 which includes maven 3.0-alpha-6. The current release of maven 3 is 3.0-beta-1, while <a href="http://raibledesigns.com/rd/entry/what_s_new_in_maven">maven 3.1</a> is due out Q1 of 2011. <br />
<br />
<b>Profiles.xml no longer supported</b><br />
I haven't really figured out the reasoning, but it doesn't really matter: maven 3.0 no longer supports a profiles.xml. Instead you place your profiles in your ~/.m2/settings.xml. Some of our database processes and integration tests require properties from our profiles.xml. It was simple to solve: I just moved my profiles to my settings.xml and everything worked.<br />
<br />
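For reference, here is a minimal sketch of what a migrated profile can look like in ~/.m2/settings.xml; the profile id and property are made up for illustration:<br />

```xml
<settings>
    <profiles>
        <profile>
            <!-- hypothetical profile moved out of profiles.xml -->
            <id>local-dev</id>
            <properties>
                <db.url>jdbc:postgresql://localhost/dev</db.url>
            </properties>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>local-dev</activeProfile>
    </activeProfiles>
</settings>
```
<br />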
<b>Upgrade GMaven Plugin</b><br />
We depend pretty heavily on the <a href="http://jlorenzen.blogspot.com/2008/01/maven-groovy-plugin-example.html">gmaven plugin</a> for testing, simple groovy scripts, and some ant calls. In order to build some modules I had to upgrade gmaven. The current version we were using was 1.0-rc-3. Our projects built perfectly after changing it to org.codehaus.gmaven:gmaven-plugin:1.2.<br />
<br />
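In POM terms, the upgrade was just a version bump on the plugin coordinates, something like this (any existing executions and configuration stay as they were):<br />

```xml
<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <version>1.2</version>
</plugin>
```
<br />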
<b>${pom.version} changing to ${project.version}</b><br />
Here maven 3 kindly warned me that uses of the <a href="http://docs.codehaus.org/display/MAVENUSER/MavenPropertiesGuide">maven property</a> pom.version may no longer be supported in future versions and should be changed to project.version. My modules still built, but I thought it was nice of maven to inform me of the potential change.<br />
<br />
<b>Version and Scope Issues</b><br />
We had a few places where we needed to define a dependency version, and another place where we shouldn't have defined a scope. Both instances prevented maven 3.0 from building our modules, but fixing them was easy. The first instance was that we defined a version for a plugin in the pluginManagement section, but maven 3 also required it where the plugin was used in the reporting section. Not exactly sure about this one; ideally you would only define your plugin versions in the pluginManagement section, but oh well.<br />
<br />
We had some WAR projects using jetty. In the jetty plugin definition we had a dependency on geronimo with a scope of provided. Maven 3 complained about it, and since the scope really isn't necessary there, just removing it fixed the issue.<br />
<br />
<b>modelVersion</b><br />
Maven 3.0 kept warning about using ${modelVersion} instead of ${project.modelVersion}. I was still able to build though, so my guess is the value for modelVersion, 4.0.0, most likely will change when maven 3.1 comes out.<br />
<br />
<b>Weird Surefire Output</b><br />
This wasn't necessarily an issue with the surefire plugin, but I wanted to comment on its output when tests failed, as I thought it might have been a maven 3 issue. Below is a screenshot of the output when you have failed tests. At first I thought it was a maven 3 issue, but I built the same project using the same commands with maven 2.2.1 and got the same test failures. Hopefully, they can clean this type of thing up, because I could imagine lots of people getting confused.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUaPE5cbmG617Bdok2VEP3JZzNxX4ihK5xx8w1QYrtwjZ5XycqM6kpCZgxrZiEqHZsadFLY0Gw4lYsvGml2WQHYAMq7DdMqLm35Rw-vp4Ai5FwFH_wHxo5zQU8r21TTiTk7B1f4VTlK0GC/s1600/maven3-test-failure.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUaPE5cbmG617Bdok2VEP3JZzNxX4ihK5xx8w1QYrtwjZ5XycqM6kpCZgxrZiEqHZsadFLY0Gw4lYsvGml2WQHYAMq7DdMqLm35Rw-vp4Ai5FwFH_wHxo5zQU8r21TTiTk7B1f4VTlK0GC/s640/maven3-test-failure.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Failed test output</td></tr>
</tbody></table><br />
That's essentially it. Happily, there really wasn't much that required changing, which goes to show the great lengths the maven team has gone through to ensure backwards compatibility. Finally, here are the <a href="https://cwiki.apache.org/MAVEN/maven-3x-compatibility-notes.html">Compatibility Notes</a> maven has provided on migrating maven 2 projects to maven 3.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-8981428296597657762010-07-19T00:38:00.001-05:002010-07-19T00:39:44.285-05:00My First Groovy DSLIt's no secret I'm a groovy <a href="http://jlorenzen.blogspot.com/2007/12/justify-using-groovy.html">homer</a>. I love it. One of the things that makes using groovy so fun is its syntax. Being able to get the contents of a file by just saying new File("/home/james/test.log").text is refreshing compared to its java counterpart. Another thing that makes groovy enjoyable is its ability to support Domain Specific Languages (DSL). <a href="http://groovy.codehaus.org/Creating+XML+using+Groovy%27s+MarkupBuilder">MarkupBuilder</a> is a great example. With Groovy, you can create simple or very complex DSLs for your purposes. To my knowledge there are a few ways you can create your own DSL: extending <a href="http://thediscoblog.com/2007/07/06/builders-are-groovys-bag/">BuilderSupport</a> or using <a href="http://groovy.codehaus.org/Using+methodMissing+and+propertyMissing">methodMissing/propertyMissing</a>. In my opinion, extending BuilderSupport is more involved, while methodMissing/propertyMissing is kind of the poor man's way of creating a DSL.<br />
<br />
Up to this point, though, I had never actually come across a good use case for creating a DSL until this past week. We have a large set of <a href="http://jlorenzen.blogspot.com/2009/01/testing-rest-services-with-groovy.html">automated tests</a> that run against our REST Services. Since our application is now multi-tenant, all of our tests need a valid organization (tenant). In our case, an organization contains multiple roles and locations. Each test has different requirements on the types of organizations it needs. Some might need 2 unique organizations, while another might need an organization with at least 2 roles and 2 locations. This was the use case where I thought a groovy DSL would fit perfectly.<br />
<br />
My end goal was to have something like this:<br />
<br />
<i>def orgs = OrganizationService.getOrganizations().withRoles().withLocations()</i><br />
<br />
This would return a list of organizations that had at least 1 role and 1 location. The nice thing about this DSL is that it's scalable; if we add new lists of information to an organization, we won't have to update our class. Also, an important feature is that the method names Roles and Locations correlate to the JSON named arrays of the organization. So my JSON looks something like this:<br />
<br />
<i>{"organizations": {"name": "James", "roles": ["R1", "R2"], "locations": ["Tulsa", "Omaha"]}}</i><br />
<br />
When writing my DSL I decided to go the poor man's way and use the methodMissing approach combined with the <a href="http://www.ibm.com/developerworks/java/library/j-pg08259.html">@Delegate</a> annotation. Here it is:<br />
<br />
<pre class="brush: groovy">import net.sf.json.JSONArray

class OrganizationFilterArray {
    @Delegate private JSONArray array

    OrganizationFilterArray(array) {
        this.array = array
    }

    def methodMissing(String name, args) {
        if (name.startsWith("with")) {
            def length = (args.length == 0) ? 1 : args[0]
            def arrayName = name[4..5].toLowerCase() + name[6..-1]
            return filterByLength(arrayName, length)
        } else {
            throw new MissingMethodException(name, this.class, args)
        }
    }

    private filterByLength(listName, length) {
        def filteredArray = array.findAll {
            it."$listName"?.size() >= length
        }
        return new OrganizationFilterArray(filteredArray)
    }
}
</pre><br />
I could have just as easily extended <a href="http://json-lib.sourceforge.net/apidocs/index.html?net/sf/json//class-useJSONArray.html">JSONArray</a> since it's not final, but I was following the @Delegate guide initially and just thought it was an interesting alternative. The big key here is how I used methodMissing to support an infinite number of filtering possibilities for an organization. Everything else I think is pretty self-explanatory. When Groovy comes across a method that is missing, such as withRoles(), it calls my methodMissing method. From there I filter out all the organizations that don't fit the criteria. Eventually, this class could be refactored to support more than just the size of an array. Note, I did have to upgrade the <a href="http://docs.codehaus.org/display/GMAVEN/Home">gmaven plugin</a> version to 1.0 to get it to work in our maven project.<br />
<br />
I knew from the beginning I wasn't going to use BuilderSupport. It did take me some time to figure out how I was going to support filtered (getOrganizations().withRoles()) and non-filtered versions (getOrganizations()). That is when I decided to extend List or JSONArray, as both method calls had to return my custom List/JSONArray. Overall, I'm pretty happy with the outcome and how long it took me. It was pretty trivial and very fun thanks to groovy.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-48683145639917203882010-07-14T23:21:00.000-05:002010-07-14T23:21:38.743-05:00Tip Debugging External Java DependenciesEver spent time debugging 3rd party java libraries? <a href="http://java.decompiler.free.fr/">Decompiling</a> is usually the first step. Attempting to walk through the code can be tedious but it's usually the first line of defense. But what if you want to deploy a slightly modified version? In the past, I've checked out the project and built it with my modifications. Since most open source projects don't support "<a href="http://jlorenzen.blogspot.com/2010/04/thoughts-on-fowlers-continuous.html">virgin builds</a>", this has a success rate of about 10%. Fortunately, there is a better way. I'm just disappointed I didn't think of it.<br />
<br />
In our project we deploy a wiki that is based on <a href="http://www.jspwiki.org/">JSPWiki</a> using <a href="http://jlorenzen.blogspot.com/2008/05/using-maven-war-overlays-to-extend.html">maven overlays</a>. In the version we are using, there isn't any support for being able to configure the wiki files directory outside of a properties file in the WAR. In order to point JSPWiki to a different directory, you would basically have to unzip the WAR, update the file, and then zip the WAR back together (#fail). So, someone on our team discovered we could basically override this behaviour by providing our own implementation of the same class.<br />
<br />
To be more specific, the class in question is <i>com.ecyrd.jspwiki.PropertyReader</i>. It's included in the JSPWiki.jar file under /WEB-INF/lib. Its default behaviour is not suitable for our needs, so we get an original copy of <i>PropertyReader.java</i> and place it under our maven project's <i>/src/main/java</i> directory under the same package, <i>com.ecyrd.jspwiki</i>. Once the project builds, we have our version of <i>PropertyReader.class</i> under /WEB-INF/classes, which is important because the ClassLoader looks under /WEB-INF/classes before looking in /WEB-INF/lib. This means our class is used instead of the one provided by JSPWiki in /WEB-INF/lib/JSPWiki.jar.<br />
<br />
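If you ever want to verify which copy of a class actually won, you can ask the JVM where it loaded the class from. Here is a minimal sketch using a stand-in class; in the webapp you'd pass PropertyReader.class and expect a /WEB-INF/classes location instead of JSPWiki.jar:<br />

```java
import java.security.CodeSource;

public class WhichClass {
    // Returns the classpath location a class was loaded from,
    // or a marker for classes from the bootstrap class loader.
    static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return (src == null) ? "(bootstrap)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(WhichClass.class));
    }
}
```
<br />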
Now I know what you're thinking: that's a horrible idea, James. And for the most part I agree, but it's not my fault this ability doesn't already exist in JSPWiki. So if you want to keep your conscience clean, go ahead and continue unpacking and repacking that WAR; I'll be happy getting important things done. Obviously, practicing this is the exception and not the rule, and one should provide the patch as an improvement back to the 3rd party for all to enjoy. And before you ask yourself why you can't just extend the real PropertyReader and override the necessary methods, which I agree would be more ideal: it's not possible, because you'd basically be extending yourself, since the modified class is the first class in the classpath.<br />
<br />
This technique has actually helped me twice to debug environment-specific issues. It saved me a huge amount of time by not having to build an external library. In fact, if you check out the exact version, you can even perform remote debugging with breakpoints.<br />
<br />
So next time you need to debug an external 3rd party library, consider using this technique before attempting to build it.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-58670875211107167562010-07-13T16:51:00.001-05:002010-07-13T16:52:48.291-05:00Avatar MavenToday I gave a quick <a href="http://www.slideshare.net/jlorenzen/avatar-maven">presentation</a> to some coworkers about <a href="http://maven.apache.org/">maven</a>. It's a broad topic, so I kept it fairly limited. Most of my audience was very familiar with maven, so I tried not to bore them with stuff they already knew. I tried making it a little engaging by comparing the Avatar, master of all 4 elements, to Maven, master of the build (it's a stretch, I know). It's a quick presentation (15 slides) providing some helpful maven tips, what's coming in <a href="http://jaxenter.com/maven-3-0-the-future-of-maven-10580.html">maven 3</a>, and <a href="http://ericmiles.wordpress.com/2010/03/26/maven-shell-features">mvnsh</a>. Hope you like it.<br />
<br />
<div id="__ss_4748255" style="width: 425px;"><b style="display: block; margin: 12px 0pt 4px;"><a href="http://www.slideshare.net/jlorenzen/avatar-maven" title="Avatar Maven">Avatar Maven</a></b><object height="355" id="__sse4748255" width="425"><param name="movie" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=avatarmaven-100713163552-phpapp02&stripped_title=avatar-maven" /><param name="allowFullScreen" value="true"/><param name="allowScriptAccess" value="always"/><embed name="__sse4748255" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=avatarmaven-100713163552-phpapp02&stripped_title=avatar-maven" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="355"></embed></object><br />
<div style="padding: 5px 0pt 12px;">View more <a href="http://www.slideshare.net/">presentations</a> from <a href="http://www.slideshare.net/jlorenzen">jlorenzen</a>.</div></div>jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-70004560187371416342010-07-09T13:55:00.002-05:002010-07-09T13:59:43.363-05:00Sharing Resources in MavenToday I needed to figure out the best way to share resources across multiple maven modules. We have previously done it 2 different ways, neither of which I thought was very good. The first way was using a relative path to reach across to the module's resource directory (usually not a good practice in maven). It went something like this:<br />
<pre class="brush: xml"><resources>
    <resource>
        <directory>../module1/src/main/resources</directory>
    </resource>
</resources>
</pre><br />
The second way was using the infamous <a href="http://www.sonatype.com/people/2008/04/how-to-share-resources-across-projects-in-maven/">maven assembly plugin</a>. I typically avoid the assembly plugin like I avoid writing Assembly. Plus I prefer to avoid 100 extra lines of XML for something so trivial. Luckily, the Sonatype guys apparently knew this and have come up with a more efficient way of sharing resources using the <a href="http://maven.apache.org/plugins/maven-remote-resources-plugin">maven-remote-resources-plugin</a>. It has the advantage of requiring a lot less XML lifting, and it's nicely integrated into the maven lifecycle. I did run into one small issue trying to get it to work. By default it only copies **/*.txt files from src/main/resources. For several minutes, I couldn't figure out why it wasn't working until I added an includes for **/*.xml. Then it worked perfectly. Here is the end result:<br />
<br />
<b>Creating a resource bundle</b><br />
Add the following to your POM which is going to create the resource bundle.<br />
<pre class="brush: xml"><plugin>
    <artifactId>maven-remote-resources-plugin</artifactId>
    <version>1.1</version>
    <executions>
        <execution>
            <goals>
                <goal>bundle</goal>
            </goals>
            <configuration>
                <includes>
                    <include>**/*.xml</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>
</pre>
You should now see the following message in your mvn output while running mvn clean install.<br />
<br />
<i>[remote-resources:bundle {execution: default}]</i><br />
<br />
This produces a /target/classes/META-INF/maven/remote-resources.xml file which contains references to the resource files. For example,<br />
<pre class="brush: xml"><remoteResources>
  <remoteResource>test.xml</remoteResource>
</remoteResources>
</pre><b>Consuming the resource bundle</b><br />
Add the following to the POM which needs to consume the new resource bundle.<br />
<pre class="brush: xml"><plugin>
  <artifactId>maven-remote-resources-plugin</artifactId>
  <version>1.1</version>
  <executions>
    <execution>
      <goals>
        <goal>process</goal>
      </goals>
      <configuration>
        <resourceBundles>
          <resourceBundle>com.lorenzen:lorenzen-core:${project.version}</resourceBundle>
        </resourceBundles>
      </configuration>
    </execution>
  </executions>
</plugin>
</pre><br />
You now should see the following message in your mvn output while running mvn clean install.<br />
<br />
<i>[remote-resources:process {execution: default}]</i><br />
<br />
You should now be able to look into your second module's /target/classes directory and see test.xml.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-67725345604659363442010-07-01T23:22:00.002-05:002010-07-01T23:25:44.420-05:00RSS, Lucene, and RESTSorry for the horrible title. I struggled trying to come up with a worthy title, but after a few minutes I decided to not let <a href="http://codebeneath.blogspot.com/2007/08/perfect-is-enemy-of-very-good.html">perfection get in the way of good</a>.<br />
<br />
My team recently worked on a new feature I am pretty excited about: adding support for RSS/Atom in our application. I know, you're thinking "so what?" It's not the what that excites me but the how: how the story was defined and implemented.<br />
<br />
<b>Approach</b><br />
We had the simple requirement from a newer customer to provide an RSS feed for newly created items. This actually wasn't the first time for this requirement. We prototyped a similar capability a long time ago using OpenESB and the RSS BC, but for <a href="http://jlorenzen.blogspot.com/2009/03/grails-create-app-esb.html">multiple reasons</a> it just didn't work out.<br />
<br />
So our first decision had to answer: how were we going to implement it... again, but better? Before the sprint began, a few of us got together and hashed out a potential solution: how about we use the Search REST Service, which is backed by <a href="http://lucene.apache.org/java/docs/index.html">Lucene</a>, to support <a href="http://lucene.apache.org/java/2_9_2/queryparsersyntax.html">advanced searches</a> and return RSS?<br />
<br />
Why does this excite me so much? To understand that I need to explain our application at a high level. It's a completely javascript-based application using ExtJS (now <a href="http://www.sencha.com/">sencha</a>), backed by REST Services using <a href="https://jersey.dev.java.net/">Jersey</a>. Consequently, we have a lot of REST Services. Right now those REST Services support returning XML or JSON using a custom Response Builder we have created internally.<br />
<br />
I'm excited because this single user story could have a huge improvement on the entire system:<br />
<ol><li>If we modified the Search Service to return RSS, then all our REST Services could support RSS.</li>
<li>The REST Service would now support Advanced searches. Previously, it only really supported basic keyword searches.</li>
<li>Any search a user performs could now be subscribed to via RSS.</li>
</ol><b>Implementation</b><br />
I'm not going to go into every detail on how it was done. I wasn't even actually the one who implemented it (see <a href="http://eclectic-tech.blogspot.com/">Matt White</a>. He did a fantastic job.). We did have one major hurdle we had to overcome, and that was how to index items to enable <a href="http://lucene.apache.org/java/2_9_2/queryparsersyntax.html">advanced searches</a> like Status=New.<br />
<br />
Previously this wasn't possible given how we were indexing our items. We were basically indexing the item by building up a large String containing all the item information like the following:<br />
<pre class="brush: groovy">import org.apache.lucene.document.Document
import org.apache.lucene.document.Field

def Document createDocument(item) {
    Document doc = new Document()
    doc.add(new Field("content",
        getContent(item),
        Field.Store.NO,
        Field.Index.ANALYZED))
    return doc
}

def String getContent(item) {
    def s = new StringBuilder()
    s.append(item.getTitle()).append(" ")
    s.append(item.getStatus()).append(" ")
    s.append(item.getPriority()).append(" ")
    s.append(item.getDescription()).append(" ")
    return s.toString()
}
</pre><br />
The problem with this is that a search for "New" would have returned any item with a status of New as well as any item that merely contained the word New. The solution was to just add another Field to the Document.<br />
<pre class="brush: groovy">doc.add(new Field("Status",
item.getStatus(),
Field.Store.NO,
Field.Index.NOT_ANALYZED));
</pre><br />
Now the Search Service could support advanced searches like: Status:"New". You should put the value in quotes in case the value contains spaces (i.e. Status:"In Progress"). And since Lucene is so powerful, it also means the following search would work: Status:"New" AND Priority:"High" AND "Hurricane". Now users have the freedom to subscribe to a near-limitless number of RSS feeds based on Advanced Searches.<br />
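To make the query syntax concrete, here's a tiny, dependency-free sketch (plain Java, not Lucene's actual QueryParser, which does far more) of how a fielded term like Status:"In Progress" splits into a field name and a value, with bare terms falling back to a default field. The class and method names are mine, purely for illustration.

```java
import java.util.AbstractMap;
import java.util.Map;

public class QueryTermSketch {
    // Splits a single query term into (field, value). A term like
    // Status:"In Progress" targets the NOT_ANALYZED "Status" field;
    // a bare term like Hurricane falls back to the default field.
    static Map.Entry<String, String> parseTerm(String term, String defaultField) {
        int colon = term.indexOf(':');
        String field = defaultField;
        String value = term;
        if (colon > 0) {
            field = term.substring(0, colon);
            value = term.substring(colon + 1);
        }
        // Strip surrounding quotes, which protect values containing spaces
        if (value.length() >= 2 && value.startsWith("\"") && value.endsWith("\"")) {
            value = value.substring(1, value.length() - 1);
        }
        return new AbstractMap.SimpleEntry<>(field, value);
    }

    public static void main(String[] args) {
        System.out.println(parseTerm("Status:\"In Progress\"", "content"));
        System.out.println(parseTerm("Hurricane", "content"));
    }
}
```

This is why quoting matters: without the quotes, Lucene would treat "In" as the field value and "Progress" as a separate term against the default field.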
<br />
<b>Start to Finish</b><br />
I think there were several reasons why this story was a success in my eyes. Most important were the two really smart co-workers who worked on it: Matt White and Chuck Hinson. All three of us knew of this user story ahead of time and we were able to discuss it technically days before backlog selection. This allowed us to brainstorm some ideas. Once we narrowed it down, we spent some more time separately looking into the code to find out the level of difficulty and whether Advanced Searches like Status:New would be possible. Overall, together I'd say we spent 3-4 hours doing the preliminary work, which I think really enabled us to give a proper WAG for the story.<br />
<br />
I really can't speak for how the development went (I was at Disney World for 10 days with the family), but I was really impressed with the tests Matt wrote. He wrote a number of unit tests making sure advanced searches worked and basic searches still worked. On top of that, he wrote an overall functional test using <a href="http://jlorenzen.blogspot.com/2009/01/testing-rest-services-with-groovy.html">HttpBuilder</a> executing the REST Service just as our javascript client would. <br />
<br />
Finally, once the main work was finished, we uploaded a diff file to our internal instance of <a href="http://www.reviewboard.org/">Review Board</a>. From there I was able to perform a peer review where we found a minor bug in the changes.<br />
<br />
<b>Summary</b><br />
I am sure it's not an original idea, but I thought it was a fun User Story that hopefully will provide a lot of value beyond what was originally estimated. Ideally, this might help others who are in similar situations.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-20385678671228764672010-05-14T22:40:00.000-05:002010-05-14T22:40:03.794-05:00How to create a release without the maven2 release pluginOne of the most referenced articles I have written is "<a href="http://jlorenzen.blogspot.com/2007/09/how-to-create-release-using-maven2.html">How to create a release using the maven release plugin</a>". But what if you can't get the maven release plugin to work with your project? Perhaps like our team, you've got a legacy maven2 multi-module project that's been nigh impossible to use with the <a href="http://maven.apache.org/plugins/maven-release-plugin/">release plugin</a>. Our project has a mix of WAR modules combined with some Flex modules. I believe our last issue was some googlecode flex mojo wasn't working with the release plugin. Consequently, for the past year or so, we've been manually creating our releases. This actually hasn't been that much of a pain since we really only do it once at the end of each sprint. Combined with my favorite <a href="http://www.debianadmin.com/howto-replace-multiple-file-text-string-in-linux.html">perl script</a> it doesn't really take that long. However, it does have the disadvantage of requiring some knowledge of what to do and how to do it. Ideally, it would be a job in <a href="http://hudson-ci.org/">Hudson</a> that anyone on the team could run as many times as they like.<br />
<br />
In an effort to try and <a href="http://jlorenzen.blogspot.com/2010/04/thoughts-on-fowlers-continuous.html">automate as much as possible</a>, I decided to try and automate releasing our legacy multi-module project using bash. This has several benefits: releases are created faster, done consistently each time, and become a turn-key solution anyone on the team can run without relying on stale documentation.<br />
<br />
It took me several hours to essentially duplicate the maven release plugin process. Thanks to our new intern Scott Rogers and linux master Ron Alleva, I was eventually able to get it finished. It's my first "official" bash script so pardon the mess. If you've never attempted to automate your release project, first consider reading my article on <a href="http://jlorenzen.blogspot.com/2007/09/how-to-effectively-use-snapshot.html">How to effectively use SNAPSHOT</a>.<br />
<br />
Here is the script available as a gist on github: <a href="http://gist.github.com/401974">project-release.sh</a>. Here is what it does:<br />
<ol><li>Copies the current working branch (i.e. trunk) into another branch. It uses the pom.xml <code>&lt;scm&gt;&lt;url&gt;</code> value to get the current working branch.</li>
<li>Updates all the pom.xml version sections of the current working branch</li>
<li>Commits the pom.xml changes</li>
<li>Checks out the release branch</li>
<li>Updates all the pom.xml version sections of the release branch (basically stripping off -SNAPSHOT)</li>
<li>Commits the pom.xml changes</li>
</ol>To run this script all you have to do is run: <i>project-release.sh 2 false</i>. The first parameter (2) is the version position to increment for the current working branch's next version. For example, if trunk was on 1.2.0-SNAPSHOT and the position passed in was 2, then trunk gets updated to 1.3.0-SNAPSHOT. If the position was 3 then trunk would be updated to 1.2.1-SNAPSHOT. The second parameter is used when testing. It's like the dryRun option in the maven release plugin. When set to true, nothing gets copied or committed.<br />
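The version-bump rule is easier to see as code. The real logic lives in the bash script; the helper below is just my hypothetical sketch of the arithmetic, not part of project-release.sh.

```java
public class VersionBump {
    // Increments the 1-based 'position' segment of an x.y.z-SNAPSHOT
    // version and zeroes every segment after it, mirroring the script:
    //   bump("1.2.0-SNAPSHOT", 2) -> "1.3.0-SNAPSHOT"
    //   bump("1.2.0-SNAPSHOT", 3) -> "1.2.1-SNAPSHOT"
    static String bump(String version, int position) {
        String core = version.replace("-SNAPSHOT", "");
        String[] parts = core.split("\\.");
        parts[position - 1] = String.valueOf(Integer.parseInt(parts[position - 1]) + 1);
        for (int i = position; i < parts.length; i++) {
            parts[i] = "0";
        }
        return String.join(".", parts) + "-SNAPSHOT";
    }

    public static void main(String[] args) {
        System.out.println(bump("1.2.0-SNAPSHOT", 2)); // 1.3.0-SNAPSHOT
        System.out.println(bump("1.2.0-SNAPSHOT", 3)); // 1.2.1-SNAPSHOT
    }
}
```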
<br />
A few notes about the script:<br />
<ul><li>The base branch URL is hardcoded but could easily be passed in as another parameter or placed and read from some external file.</li>
<li>It uses the command <i>xpath</i> to extract the pom version, project name, and scm url. I'm on ubuntu 9.10 and according to synaptic I have libxml-xpath-perl version 1.13-6 installed.</li>
<li>It doesn't run any maven commands like mvn deploy. Other jobs in CI can accomplish that or you can easily add them into the script.</li>
<li>To run from Hudson:</li>
<ul><li>Create a New Job</li>
<li>In the Build section Add a Execute Shell Step</li>
<li>Update the Command text with: $WORKSPACE/trunk/project-release.sh 2 false</li>
</ul></ul>Overall, I'm pretty happy with the outcome. And as we start to perform more releases among multiple projects I think it's going to really come in handy. I think ideally you should try and release your project using the maven release plugin, but if that isn't possible then don't give up. Just clone.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.comtag:blogger.com,1999:blog-1280619439915049383.post-50587535299862946602010-04-14T22:43:00.004-05:002010-04-15T23:33:10.855-05:00Thoughts on Fowler's Continuous IntegrationIt's always kind of nice to go back to the basics. I've always enjoyed re-reading basic programming practices and patterns. I tend to forget the things I don't use on a daily basis. That's why I enjoyed reading Martin Fowler's article on <a href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a>. The article says the last significant update occurred May 2006, but it's withstood the test of time; much like <a href="http://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar">The Cathedral and the Bazaar</a>. But if you don't have the time to read this rather long article, here are a few of the favorites I pulled out as I read it over the course of a few days. Before that, let me explain a little bit of my experience.<br />
<br />
At my first programming job we didn't really have a VCS (Version Control System) like CVS or SVN nor did we have a CI (Continuous Integration) server; we really didn't know any better. We essentially did all of our work straight off a shared drive (I know). But that was before I came to Gestalt, now Accenture, 5 years ago. Since then I've been exposed to CVS-->SVN-->Git, Ant-->Maven 1-->Maven 2, CruiseControl-->Hudson, and finally TDD (Test Driven Development). Being exposed to all of this has been a huge improvement to my career. More importantly, it's been a huge benefit to how I write software and the tools our teams use such as VCS and CI. I can't imagine developing software without them.<br />
<br />
Here are some points out of <a href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a> that I would think applies to any project java or not:<br />
<br />
<b>Work does not stop on your commit</b><br />
<blockquote>"However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done."</blockquote>So true. Just because you ran some tests locally or manually tested it before checking in your changes doesn't mean you're done. You've got to monitor CI to ensure it passes. This has been a topic of discussion on my team lately as we've come in in the morning to a few broken builds. Solution: check in often during the day, but don't check in, leave, and never verify CI passed. Either stay late, sign in from home, come in early, or check in first thing the next day.<br />
<br />
<b>Simple checkout build rule</b><br />
<blockquote>"The basic rule of thumb is that you should be able to walk up to the project with a virgin machine, do a checkout, and be able to fully build the system."</blockquote>This is a very important point. Not only will this improve the productivity of new team members but also reduce the amount of time it takes to create new CI jobs. This rule is even more important for open source projects. I've had several issues in the past trying to patch open source projects and wasted several hours just trying to build their code. If you want people to contribute to your project, make it easy for them to build your software. For example, I've been wanting to write a simple <a href="http://do.davebsd.com/wiki/Docky">Docky plugin</a> for <a href="http://hudson-ci.org/">Hudson</a>, but have run into several issues (<a href="https://answers.launchpad.net/docky/+question/99325">New Plugin</a> and <a href="https://answers.launchpad.net/docky/+question/99967">Missing Package</a>) trying to build the Do project. Have those questions really been Answered? NO! What have I done about it? I haven't retried it since. To restate Mr. Fowler, I should be able to easily check out your code and at a minimum build it. As an added bonus it'd be nice to run unit tests as well.<br />
<br />
<b>Automate everything</b><br />
<blockquote>"However like most tasks in this part of software development it can be automated - and as a result should be automated. Asking people to type in strange commands or clicking through dialog boxes is a waste of time and a breeding ground for mistakes."</blockquote>If you're just getting started with CI this can often be difficult. But your long-term goal should be to automate everything. This includes creating/destroying your database, deploying/undeploying your application, automating your tests, and copying configuration files around. I'd even go as far as to say automate the creation of the development environment: installing maven and java, for example. Again, this not only speeds up new team members' productivity but also the setup of those virgin CI servers.<br />
<br />
Two great examples of this come to mind. Before we had an internal CI team, our team was manually setting up multiple CI servers with maven, java, jboss, and a database; these new servers couldn't be used until all of this stuff was manually configured. Then our internal CI team helped automate some of it, and now we can easily use hudson to point jobs at different servers within minutes, something that wasn't really possible before without manual intervention. And all they really did was call a few simple ant copy commands from maven.<br />
<br />
Another good example of this comes from our old CruiseControl and Ant days. At one point in our project we were constantly breaking a major piece of functionality, and one of the main reasons was that it was very difficult to test. It was a distributed test with multiple servers communicating with multiple clients via SIP. The build process called for building the latest code, stopping 2 instances of weblogic (1 local, 1 remote), starting weblogic, deploying the latest code, waiting for weblogic to finish starting (not easy mind you), and then running our automated test. This was a rather huge undertaking, but given a few weeks we had the core of it automated. It was amazing. I never thought it would have been possible, but it was, and anytime that test failed we knew immediately we broke something. We were able to accomplish the difficult parts by calling remote bash scripts via ssh from ant.<br />
<br />
<b>Imperfect Tests</b><br />
<blockquote>"Imperfect tests, run frequently, are much better than perfect tests that are never written at all."</blockquote>I'm not exactly sure what he means by imperfect tests, but this is one place where I currently disagree. It takes practice to write good tests. Once you refactor and maintain tests over a long period of time, you start getting pretty good at writing tests that require less refactoring. One of the things killing the productivity of our team right now is what I call "chronically failing tests", or tests that randomly fail for no reason. You check the change log and nothing changed in the build, which means it shouldn't have failed. You rebuild the job and it passes. Lately this can be attributed to date comparison asserts and issues with timing. For example, the test passes when the database is local but fails when the database is remote. Or you get different results when the time on the database server is not sync'd. The end result is false negatives that really hurt the validity of CI; developers just start ignoring all failures. Once you've identified one of these chronically failing tests, it's important that the author of that test, or the person who last modified it, refactor the test to be flexible. If the author doesn't do it, they will continue producing these types of imperfect tests.<br />
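To illustrate the timing problem (this is my made-up example, not our actual test code): asserting exact timestamp equality breaks whenever the database server's clock drifts from the test machine's, while asserting within a tolerance window does not.

```java
public class TimeAssertSketch {
    // Fragile: exact equality fails whenever server and test clocks differ
    // by even a millisecond, producing the random failures described above.
    static boolean fragileCheck(long createdMillis, long nowMillis) {
        return createdMillis == nowMillis;
    }

    // Robust: accept any timestamp within a tolerance window, so a remote
    // database with a slightly un-sync'd clock no longer breaks the build.
    static boolean tolerantCheck(long createdMillis, long nowMillis, long toleranceMillis) {
        return Math.abs(nowMillis - createdMillis) <= toleranceMillis;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long created = now - 1500; // simulate 1.5s of clock skew / latency
        System.out.println(fragileCheck(created, now));        // false
        System.out.println(tolerantCheck(created, now, 5000)); // true
    }
}
```

The tolerance should be generous enough to absorb realistic skew but tight enough to still catch genuinely wrong timestamps.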
<br />
<b>Good Build Characteristics</b><br />
He had several comments I would wrap into good general build characteristics, two of which are fast builds and accessible artifacts. As a general rule he suggests keeping build times to around 10 minutes, which is usually achievable for compile/unit-test jobs, but database-related jobs and above usually take longer. My general guideline is to keep those longer-running builds to around 30 minutes, and definitely no longer than an hour. Unfortunately, right now we have several 40-55 minute builds I'd like to trim down. It'd be great to see a hudson plugin that could show me how long each part of my build took.<br />
<br />
With a combination of our company maven repository and hudson, it's pretty easy to make our artifacts accessible. This is really huge: sometimes I don't waste time building things that take forever to build; I'll just download them from hudson. A lot of times our DBA will just download the zip he wants to test, which saves him from updating his source and building, etc. Relatedly, we have several nightly jobs that deploy the latest code to jboss/websphere so everyone can see/test/verify it the next day.<br />
<br />
<b>Rollback Deployment</b><br />
<blockquote>"If you deploy into production one extra automated capability you should consider is automated rollback."</blockquote>This was a pretty new concept for me and one we don't necessarily follow. I've heard of Continuous Deployment, but never really heard about a rollback feature. I know we've accidentally benefited from a build failing and not deploying the latest nightly code, which allowed us to perform diff-debugging to track down a bug. We had 2 servers that built the night before; 1 passed and the other failed, so it contained the previous day's build. A bug was detected on the passing server and we were unable to reproduce it on the outdated server. This told us it had been introduced in the past 24 hours. This isn't exactly rolling back, but maybe the moral of the story is to keep a server around that is a day behind.<br />
<br />
<b>Summary</b><br />
There is a lot of good general information in this article and I would encourage anyone to take the time to read it. I only highlighted the things that really stuck out at me; there were a lot more useful things I passed over mentioning.jlorenzenhttp://www.blogger.com/profile/13635369821860631868noreply@blogger.com