Setting Up Jenkins to Deploy CFML Applications as WAR Files

I finally got my act together a couple of weeks ago and set up a Jenkins server so we could start auto-deploying our applications to staging servers. Since we’re doing agile/scrum on our projects now the product owners tend to like to see changes as they happen and we also have dedicated testers involved with some of our projects, so automating deployment to a staging server is saving us a lot of time and headaches.

We run all our CFML applications on OpenBD and Tomcat and for the most part deploy applications as self-contained WAR files, so the deployment steps in this environment are:

  1. Get latest code from Subversion trunk (we use trunk for ongoing development and branch for releases)
  2. Create WAR file
  3. Transfer WAR file to target server for deployment
Pretty simple. I should note at this point that I won’t be covering how to incorporate unit tests into the build/deploy process, both because I want to focus on the Jenkins side of things in this post and because that aspect is covered quite well elsewhere. (And I’ll be honest: we aren’t yet doing unit testing consistently enough in our code for it to be part of our build process, but we’re working towards that.)
I also won’t cover installing Jenkins since there are many resources on that as well. In my case on Ubuntu Server it was a simple matter of adding Jenkins to sources.list, doing sudo apt-get install jenkins, and then doing a bit of Apache configuration to get up and running. You can read more about installing Jenkins on Ubuntu here, and if you have specific questions about that aspect of things, I’m happy to try to answer them.

Step 1: Create an Ant Build Script

As for the specifics of setting this up, the first step is to create an Ant script to tell Jenkins what to do when the job runs (we’ll create the Jenkins job in a bit). This is key because without a build script Jenkins doesn’t really do much. We’ll create a build.xml in the root of our project, and when we create the Jenkins job we’ll tell it which target from the build script to run.
Since Jenkins “knows” about Subversion you do not have to include anything in your build script to pull the code from Subversion. So given that our applications get deployed as self-contained WAR files, all our Ant script has to do is build the WAR file from the code Jenkins pulls from Subversion.
I should clarify that even though I’m explaining the Ant script first, Jenkins actually runs the build script after it pulls code from Subversion. Since you specify an Ant target when you create the Jenkins job, though, it makes sense to cover the build script before the job itself.
Here’s a sample build script.
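A minimal version looks something like this sketch; the project name foo, the directory names, and the extensions listed in the imageFiles property are placeholders you’d adjust for your own application.

<?xml version="1.0" encoding="UTF-8"?>
<project name="foo" default="war" basedir=".">

    <!-- Properties: declare values once so they aren't hard-coded throughout the script -->
    <property name="build.dir" value="build" />
    <property name="dist.dir" value="dist" />
    <property name="war.name" value="foo.war" />
    <!-- File extensions treated as binary so they're copied separately from text files -->
    <property name="imageFiles" value="**/*.gif,**/*.jpg,**/*.png,**/*.ico" />

    <!-- Delete and recreate the build and dist directories so we start with a clean slate -->
    <target name="init">
        <delete dir="${build.dir}" />
        <delete dir="${dist.dir}" />
        <mkdir dir="${build.dir}" />
        <mkdir dir="${dist.dir}" />
    </target>

    <!-- Copy the application into the build directory: text files first, then the image/binary files -->
    <target name="build" depends="init">
        <copy todir="${build.dir}">
            <fileset dir="." excludes="${imageFiles},${build.dir}/**,${dist.dir}/**,build.xml,.project,**/.svn/**" />
        </copy>
        <copy todir="${build.dir}">
            <fileset dir="." includes="${imageFiles}" excludes="${build.dir}/**,${dist.dir}/**" />
        </copy>
    </target>

    <!-- Build the WAR from the build directory and drop it in the dist directory -->
    <target name="war" depends="build">
        <war destfile="${dist.dir}/${war.name}" webxml="${build.dir}/WEB-INF/web.xml">
            <fileset dir="${build.dir}" excludes="WEB-INF/web.xml" />
        </war>
    </target>

</project>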

The script isn’t nearly as daunting as it may look so let’s walk through it.

The properties section at the top declares variables we’ll use later so we don’t have to hard-code those values in multiple places in the script.

The next section contains the targets, which are the specific tasks that can be executed. Note that targets may have dependencies, so if you execute a target that has dependencies, the dependencies run first in the order they are declared, and then the target you specified runs.

In the case of this script we have three targets: build, war, and init. The war target depends on build, and build depends on init, so when we specify ‘war’ as our target in Jenkins later, init will run first, then build, then war. Let’s look at these in order.

The init target at the bottom does basic cleanup by deleting and recreating the directories into which the source code is dumped and where the WAR file is built, so we start with a clean slate.

The build target runs two copy jobs to get the application files into the build directory, which is just a directory to temporarily hold the files that will be included in the WAR. First the build target copies all the non-image files into the build directory, and then it copies all the image files into the build directory.

The reason for doing this in two steps is that if you copy plain text and image files in the same copy job, the image files become corrupted in the process. As you can see, the first copy operation excludes image files and the second includes only the image files, as identified by file extension in the imageFiles property declared at the top of the script. If you have other binary files in your applications that may become corrupted (note that JAR files seem unaffected by this issue …) you’ll want to add those file extensions to the property that indicates which files in your application are binary.

Also note that I’m excluding the build and dist directories, the build.xml file, the .project file that Eclipse adds to projects, and all the .svn files so those aren’t included in the build.

So at this point after init runs we have clean directories for doing our build, and then the build target copies all the files (other than those being excluded) from the root of the project into the build directory.

The last step is to create the WAR file, and this is (not surprisingly) done in the war target in the build script. Since Ant knows how to build WAR files this is pretty simple; you just point the war command to the directory where the application files are located (the build directory in this case) and tell it the target name and location of the WAR file, which we’re putting into a dist directory.

To review, what we’ll tell Jenkins to do in a minute is to run the war target (which in turn is dependent upon the init and build targets) in our build script, which will:

  1. Run the init target which deletes and recreates the build and dist directories so we start with a clean slate
  2. Run the build target which copies all the code and binary files from the root to the build directory
  3. Run the war target which creates a WAR file from the code in the build directory and puts it in the dist directory
Once you have your Ant script created, save it as build.xml in the root of your project and commit that to SVN so Jenkins will have it available when it runs the build.

Step 2: Create a Jenkins Job

With the hard part out of the way, next you’ll need to create a job in Jenkins by clicking on “New Job” in the top left of the Dashboard.
Give the job a name, select “Build a free-style software project” and click “OK.”

Step 3: Point Jenkins to Your SVN Repository

On the next screen you can configure some additional settings like whether or not to keep previous builds, if there are any build parameters you need to specify, etc., but we’ll focus on the SVN configuration for the purposes of this post.
Select Subversion under Source Code Management and enter the details about your repository. This tells Jenkins where it’s going to get the code to do the build.

Be sure to give Jenkins the full path to the appropriate directory in Subversion. For example, if you build from trunk and your project name is foo, your URL would be something like http://subversionserver/foo/trunk, not just http://subversionserver/foo

As a reminder, since we deploy our CFML applications as WAR files using OpenBD, our SVN repository includes not only our application’s source code but also the OpenBD engine, so this is traditional Java web application deployment. This is a great way to do things because the application is truly self-contained and all the configuration such as datasources, mappings, etc. is in SVN. This way you can pull down the project from SVN and be up and running instantly, and it makes deployment really simple.

Step 4: Set the Jenkins Build Trigger

At this point Jenkins knows where your code is but we need to tell Jenkins what triggers the build process to run. You can do this multiple ways but in my case I simply set up Jenkins to poll SVN every 5 minutes to check for new code. Another common way to do this is to use a post-commit hook in SVN to hit a Jenkins URL that triggers the build, but polling is working well for us.

Scroll down to the Build Trigger section of the configuration screen.

Check the box next to “Poll SCM,” and then you can set the polling schedule using crontab-style notation. Mouse over the ? next to the box if you need a refresher on the syntax. In the example here, the schedule tells Jenkins to poll SVN every five minutes to see if there are any changes; if there are, the build is triggered. We’ll review what happens when the build is triggered at the end of this post.
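For example, a schedule like this polls every five minutes:

*/5 * * * *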

Step 5: Set the Ant Build Target

Just a couple more steps in configuring the Jenkins job. Next we need to tell Jenkins which target in build.xml to run as part of the build. Calling build.xml is kind of an implied step with Jenkins since you don’t have to explicitly tell it to look for build.xml. It’s assumed you’ll have an Ant script in the root of your project and that either the default target or a specific target will be run as part of the build process.
In the Build section of the configuration page, specify ‘war’ as the target to run from your build.xml file in the root of your project.
At this point Jenkins will:
  1. Poll SVN every 5 minutes to check for changes
  2. If there are changes, Jenkins will pull everything from SVN
  3. After everything is pulled from SVN, Jenkins will execute the war target from build.xml which will generate a WAR file that can be deployed to Tomcat (or any servlet container)
The last step is getting the generated WAR file to a target server.

Step 6: Configure the Post-Build Action to Deploy the WAR

One of the great things about Jenkins is the huge number of plugins available, and we’ll be using the SCP plugin in this final step. There are also deployment plugins for various servlet containers but since in the case of Tomcat that involves hitting the Tomcat manager and uploading the WAR over HTTP, I found SCP to be much more efficient and flexible.
After you install the SCP Plugin you need to go to “Manage Jenkins” and then “Configure System” to configure your SCP target, user name, and password. These are configured globally in Jenkins and then you simply select from a dropdown in the post-build action section of the Jenkins project configuration.
In the post-build action section of the project configuration:
  1. Check the box next to “Publish artifacts to SCP repository”
  2. Select the appropriate SCP site in the dropdown
  3. Specify the artifact to copy to the server. In our case this is the WAR file, and you specify the path and filename relative to the root of the Jenkins project. For example if you check things out from an SVN directory called ‘trunk’ and use the same directories in the Ant script above, your WAR file will be in trunk/dist/foo.war
  4. Specify the destination relative to the path you specified when you set up the SCP server, if necessary. If you specified Tomcat’s webapps directory as the root in the SCP server and all your projects live in that directory you may not need to specify anything here.
One more configuration issue to note: in the case of Tomcat you need to make sure the host for your application is configured to auto-expand WARs and auto-deploy. That way, when the WAR copy is complete Tomcat will deploy the new version of the application.
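For reference, that means the Host entry in Tomcat’s server.xml looks something like this (the host name and appBase here are placeholders; unpackWARs and autoDeploy are the attributes that matter):

<Host name="staging.example.com" appBase="webapps" unpackWARs="true" autoDeploy="true" />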

Summary

As with most things of this nature it took a lot longer to write this blog post than it will take to set this stuff up. The only even remotely involved portion of all of this may be tweaking the Ant script to meet your needs, but the rest of the process is pretty straightforward.
At the end of all of this we wind up with a Jenkins job that polls SVN for changes every 5 minutes and, if there are changes, triggers the build. The build process:
  1. Pulls down changes from SVN
  2. Runs the war target in the build.xml in the root of the project, which …
    1. Dumps a clean copy of the application into the build directory
    2. Creates a WAR from the build directory and puts the WAR into the dist directory
  3. Copies the WAR file to a target server using SCP
Once the WAR file is copied to the target server, provided Tomcat is configured to do so, it will redeploy the application using the new WAR file.
There’s of course a bunch of different ways to configure a lot of this but this is working well for us. If you have other approaches or if anything I’m doing could be improved upon, I’d love to hear how you’re using Jenkins.

A Short Missive Concerning SQL Server, Named Instances, and JDBC

Since this seems to come up with some regularity on mailing lists and I happen to be in the midst of a massive SQL Server migration (lucky me) at the moment, I figured I'd set the record straight on this topic once and for all.

Named instances in SQL Server are not magic. Like everything else on servers, they run on a port. This named instance nonsense is simply Microsoft's way of getting around making people use a port number. (Because as we all know, having to deal with numbered ports is probably the biggest headache anyone in IT has to deal with. Yes, I'm being sarcastic.)

Anyway, even though the MS tools would like you to think this is all magic, not only do named instances run on a port, you actually pay a penalty by not referring to the port directly in your connection strings.

Why? Because again, there is no magic in IT, and the named instance doesn't mean squat to anything but SQL Server itself. So if you connect using a named instance, there's an additional round trip to the server (the SQL Server Browser service) to translate the instance name you're asking for into a port number and return it to whatever is connecting, at which point the connection is established using the port number. MS says so themselves here if you don't want to take my word for it.
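To put that in connection string terms with Microsoft's JDBC driver (the server, instance, and database names here are made up), the first form pays the extra lookup penalty and the second connects straight to the port:

jdbc:sqlserver://dbserver\MyInstance;databaseName=mydb
jdbc:sqlserver://dbserver:1433;databaseName=mydb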

Hope that settles it. Use the port numbers and forget about all this named instance voodoo, because that's all it is. Voodoo.

Intro to Google App Engine for Java and CFML Developers

At OpenCF Summit 2011 we were very lucky to have Chris Schalk from Google come present on Google App Engine. If you’re not familiar with Google App Engine (GAE) you should be! It’s an absolutely fantastic application platform as a service offering from Google with great functionality, very slick features, and incredibly generous quotas for free application hosting. And if you need to go beyond these quotas, you simply configure billing and pay nominal fees for what you use over the free quotas.

GAE lets you deploy Python and Java applications, but one of the most interesting things going on with Java these days is the numerous different languages that run on the JVM. The Java platform being available on GAE opens up some very cool options.

At OpenCF Summit I followed Chris’s presentation with one specific to Open BlueDragon on Google App Engine (GAE). OpenBD is a Java-based CFML runtime engine that allows you to deploy CFML applications to any standard servlet container, and also allows you to deploy your CFML applications to GAE. This is a great option for CFML developers since it’s a quick and easy–not to mention free for many apps!–way to get your CFML applications online without having to worry about setting up a server yourself or getting a hosting account. Not to mention that if you deploy your CFML apps on GAE you get the benefits of running on Google’s infrastructure and have on-demand scalability for your apps.

If you’re not familiar with CFML, it’s an incredibly powerful dynamic scripting language and framework that runs on the JVM. Think of it as a Java Tag Library on steroids. Even if you choose to build the backend of your applications in Java, CFML is a fantastic view layer language that’s a great alternative to JSP, it interacts seamlessly with Java code, and it makes a lot of things that are quite verbose in Java extremely quick and easy. Of course CFML is a full-fledged language as well so you can build entire applications in it quickly.

The tools available for GAE make it very easy to work with. If you use Eclipse, a great option is to grab the GAE plugin for Eclipse. This gives you the entire GAE environment that will run right inside Eclipse and let you develop and test locally. Then when you’re ready to deploy to GAE, it’s a right-click away.

We also have great tools for OpenBD. In addition to being able to grab the GAE edition of OpenBD and drop that into an Eclipse project, you can use the new OpenBD Desktop. This is a desktop application that runs on GNU/Linux, Mac, and Windows, and lets you set up a local development server in seconds. Once development is complete you can then deploy your CFML application to a standard JEE WAR, or you can deploy straight to GAE from OpenBD Desktop.

[Screenshot: OpenBD Desktop]

In this post I’m going to cover getting up and running with GAE in Eclipse. In my next post I’ll go over the demo application I built for my presentation at OpenCF Summit and highlight some of the cool features not only of OpenBD for GAE but of GAE itself.

I’m going to cover how to set things up in Eclipse in this blog post, but I’ll have another how-to and screencast covering OpenBD Desktop soon.

Installing the Google App Engine Plugin for Eclipse

I’m going to assume my audience is mostly Java or CFML developers who are already somewhat familiar with Eclipse, but if you need assistance with this piece of things please leave me a comment and I will be happy to help.

The GAE plugin for Eclipse installs in the same way as any other add-on for Eclipse. You simply open Eclipse, go to Help -> Install New Software and paste in the appropriate update site URL for your version of Eclipse. This will download everything you need to work with GAE from within Eclipse, including the GAE for Java SDK.
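For example, for Eclipse 3.6 (Helios) the update site URL is http://dl.google.com/eclipse/plugin/3.6; adjust the version number at the end to match your Eclipse release (the plugin documentation lists the URL for each version).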

Once the plugin installs and you restart Eclipse, you’ll notice new Google icons in your Eclipse toolbar:

[Screenshot: Google App Engine icons in the Eclipse toolbar]

As well as a new Google right-click menu:

[Screenshot: Google right-click menu in Eclipse]

To create a new GAE project, you go to File -> New -> Project, and in the Google folder choose Web Application Project.

[Screenshot: New Web Application Project dialog]

You can then run your GAE application from within Eclipse by right-clicking the project and choosing Run As -> Web Application. Running in debug mode and all the other Java functionality with which you may be familiar is of course also available.

Note that because GAE is a platform as a service offering, the entirety of the Java world isn’t necessarily available to you. If you’re curious what is and isn’t available check the JRE Class Whitelist in the GAE docs.

Installing Open BlueDragon for GAE

With a new GAE project created in Eclipse, installing OpenBD for GAE is as easy as downloading a zip file, unzipping, and copying files into your GAE project’s war directory.

When you download and unzip OpenBD for GAE you’ll see these contents:

[Screenshot: contents of the OpenBD for GAE download]

If you’re new to OpenBD for GAE you’ll want to read the README files included.

To add OpenBD to your GAE project, go into the war directory in the unzipped OpenBD for GAE directory, and copy all the files in the OpenBD war directory into the war directory in your Eclipse GAE project. Note that you will overwrite any files with the same name in the Eclipse project, which is what you want to do.

What you do not want to do is delete all of the existing files in the war directory of the Eclipse project and keep only the OpenBD GAE war files. To put it another way, you are merging the OpenBD for GAE files into the files that are already in the Eclipse project, and any files with the same name will be replaced by the OpenBD for GAE versions.

With these files in place, right-click on the project in Eclipse and choose Run As -> Web Application. You should see something similar to this in the Eclipse console:

[Screenshot: OpenBD for GAE console output in Eclipse]

You may see some warnings as well but these are typically harmless, and if you already have something running on port 8888 you’ll want to shut that down before launching the OpenBD GAE application.

If everything started up successfully you can then navigate to http://localhost:8888 in a browser and see this:

[Screenshot: OpenBD for GAE welcome page]

You’re now all set to build CFML applications for GAE!

Working with CFML Code

Working with CFML code in an OpenBD for GAE project is no different than typical CFML development. The only real thing you need to be aware of is that your CFML code must be placed in the war directory in your Eclipse project. This is the root of your application. (Note that if you haven’t worked with CFML or perhaps use a CFML editor other than Eclipse, you’ll want to install CFEclipse, which is a great open source CFML plugin for Eclipse.)

Let’s add a CFML file to our project so you can get a feel for working with CFML code in the context of an OpenBD for GAE project.

In your Eclipse project right-click the war directory and choose New -> File and name the file test.cfm. In this newly created file, add the following code:

<cfset name = "Matt" />
<cfoutput>Hello #name#! Today is #DateFormat(Now())#.</cfoutput>

Save the file, and then go to http://localhost:8888/test.cfm in your browser. You should see this:

[Screenshot: output of test.cfm]

That’s all there is to it. You can now build CFML applications as usual using OpenBD for GAE.

Up until recently there have been some differences in supported syntax and functionality between “regular” OpenBD and OpenBD for GAE, but as of the next release of OpenBD the regular Java edition and the GAE edition will have the same exact functionality available, other than where specific functionality is not allowed on the GAE platform. The current nightly builds of OpenBD are based on this new unified codebase between the two editions of OpenBD.

Most CFML code will work fine on OpenBD for GAE. For example, the Mach-II framework as well as ColdSpring both work perfectly, and these frameworks are being used for the open source Enlist application that we started developing during the hackfest at OpenCF Summit.

What’s Next?

Probably the major thing developers will run into immediately when building apps for GAE is that a traditional relational database is not available other than through Google App Engine for Business. If you aren’t on the GAE for Business platform, you’ll be using the Google Datastore, which is a high-performance, highly scalable key-value (“NoSQL”) datastore that can be accessed via JDO or JPA, and also in CFML via GAE-specific functionality built into OpenBD for GAE.

I’ll cover the Google Datastore as well as some of the amazing features of the GAE platform (including receiving mail and receiving/sending XMPP messages) in my next post.

Quick Tip on Logging with Java on Google App Engine

I’m in the process of messing around with some really cool stuff on Google App Engine (wonder why …) and I ran into something that I didn’t see covered in the docs (which as a rule are excellent). I came across the solution in a sample app for GAE, so I thought I’d share.

Setting up logging in a servlet is pretty straightforward, particularly if you’re using the Eclipse Plugin for GAE since it more or less configures it for you. Here’s a quick example. This is dummy code obviously, but pay attention to the package declaration.

package org.opencfsummit;

import java.io.IOException;
import java.util.logging.Logger;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyServlet extends HttpServlet {

    private static final Logger log = Logger.getLogger(MyServlet.class.getName());

    public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        // do whatever else your doPost method needs to do here, and write some info to the log
        log.info("I'd like to write this to the log, please.");

    }

}

That all seemed well and good, but when I looked for my log entries in the GAE admin console, nothing was getting logged.

Well, turns out that even though the Eclipse plugin sets up the basic logging configuration for you, there’s a bit of additional configuration needed related to the package in which the class you’re logging from resides.

The default WEB-INF/logging.properties file contains this:

# Set the default logging level for all loggers to WARNING
.level = WARNING

Now what didn’t make sense to me until I found an example and tried it was why my log.info() calls weren’t doing anything. Maybe I’m missing something, but I had to add this to logging.properties to get things to log:

org.opencfsummit.level = INFO
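With that line added, the relevant portion of WEB-INF/logging.properties ends up looking like this:

# Set the default logging level for all loggers to WARNING
.level = WARNING

# Log at INFO and above for classes in the org.opencfsummit package
org.opencfsummit.level = INFO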

Now why explicitly calling log.info() wouldn’t log anything at the INFO level didn’t make sense to me at first, but I think the reason is that the levels in logging.properties control which messages a logger will pass through at all, so with the root level set to WARNING, INFO messages are discarded no matter which method you call. In any case, adding my package name and a level to logging.properties definitely took care of the issue.

Dynamically Invoking Method Names On a Java Object From CFML

A co-worker contacted me today asking how he might go about solving what turned out to be a rather interesting issue.

From a CFML application (running on Open BlueDragon) he’s calling a .NET web service and getting an array of objects back. By the time all the SOAP magic happens the objects in the array on the CFML side are Java objects.

What he wanted to do next was loop over this array of Java objects and, for each object in the array, call every getXXX() method contained in the object. But the application is getting numerous different types of objects back, some of which have a large number of get methods in them, and he didn’t want to have to hard-code a method call for each one. In addition, the get methods may change from time to time and while we’re supposed to be notified when changes occur, we didn’t want to rely on that.

So consider the following pseudocode:


<cfinvoke webservice="url to webservice"
    method="methodname"
    returnvariable="myJavaObjects" />

So at this point the variable myJavaObjects is an array of homogeneous Java objects.

Next, we want to loop over that array and for each Java object, call all of its get methods.

My first thought was that this is one of those rare cases where Evaluate() might be justified. But I also thought there had to be a better way, so perhaps against my co-worker’s will we spent about an hour hammering through some experiments. I’ll spare you the various things we tried and cut to the chase of the final solution.

One thing I learned while working through this is that CFINVOKE works on Java objects. Who knew? OK, maybe you knew, but I hadn’t ever had cause to try it before so I didn’t know.

So step one is once we get the array of Java objects back from the web service, since we know they’re homogeneous objects, we’ll just use some Java reflection magic on the first one to create an array of the method names beginning with get:


<!--- this returns an array of java.lang.reflect.Method objects --->
<cfset methods = myJavaObjects[1].getClass().getMethods() />
<!--- now we'll create an array of method names starting with get --->
<cfset methodNames = [] />
<cfloop array="#methods#" index="method">
    <cfif Left(method.getName(), 3) EQ "get">
        <cfset ArrayAppend(methodNames, method.getName()) />
    </cfif>
</cfloop>

So now we have an array of strings that are the method names from the Java object that start with get. In the actual application we’re omitting some of the methods starting with get because they’re not relevant (e.g. getClass(), getSerializer(), etc.), so I’m just keeping it simple for the purposes of illustration.

The next step is to loop over the array of Java objects, and on each loop iteration, call each get method and for demo purposes simply output the results. Here’s where we use CFINVOKE to call methods dynamically on the Java objects:

<cfloop array="#myJavaObjects#" index="javaObject">
    <cfloop array="#methodNames#" index="methodName">
        <cfinvoke component="#javaObject#" method="#methodName#" returnvariable="foo" />
        <cfoutput>#foo#<br /></cfoutput>
    </cfloop>
</cfloop>

And with that, we’re getting an array of Java objects back from a .NET web service (with the .NET object to Java object translation being handled transparently by the web service engine of course), we’re using a bit of Java reflection to get a list of the getters from the Java object, and we’re then looping over the array of Java objects and calling the get methods on each Java object.

As an aside, during experimentation we went down the path of using Java reflection directly, but that got pretty messy and didn’t seem to offer any benefit over doing things at a higher level in CFML. Interestingly, while we were messing with some things we had an error generated from CFINVOKE reminding me that under the hood, CFINVOKE is doing all the Java reflection nastiness for you.

Not sure how handy a tip that will be for others but I wanted to blog it while it was still fresh in my mind. There’s probably several other ways to solve this problem so if others have approached this differently I’d love to hear about it.

Polyglot Web Development With the Grails Framework #s2gx

Jeff Brown – SpringSource

Polyglot?

  • "many languages"
  • writing software in multiple languages
  • some people would say if you do any web development, you're doing polyglot
    • javascript, css, html, java, etc.
  • in the context of this talk, we'll be talking about implementing the actual business logic with multiple languages

Languages on the JVM

  • 200+ languages available on the JVM
  • many of these aren't exactly practical, but many are
  • at least 10-12 programming languages available on the JVM that could be used for serious development
  • big players are java, groovy, clojure, scala, jruby, jython
  • which of these is the best? no answer of course
    • personal preference, best tool for the job, etc.
  • many of these languages solve specific problems really well
  • all these languages are turing complete, so anything you can do in one you can do in another
  • but depending on the problem you're trying to solve, you may find one language or another is ideally suited to the task at hand
  • reached a point with CPU development where the speed of light is a factor in terms of increasing the speed
    • can't really make processors any faster with the current method of developing processors
    • instead of making faster processors, we're using more processors and multiple cores
  • concurrency is becoming more and more important
  • OO languages don't lend themselves to managing concurrency very well
    • allocate objects on the heap, objects in a shared mutable state
    • best we can do in OO languages is use locks so multiple threads can't access things at the same time
    • problem with locking is it's opt-in
  • functional languages make the concurrency problem almost disappear
    • no such thing as destructive assignment in a pure functional language
    • clojure and scala *do* allow destructive assignment
    • but in clojure, for example, you have to do this in a transaction
      • get a snapshot of the heap, and nothing can change on the heap while you're making changes
  • because of the advantages in terms of concurrency, it will be more common to use polyglot programming moving forward
    • e.g. write much of the application in groovy, but build parts of the application using a functional language
  • ultimately all this will run on the jvm, but we can take advantage of the best things each language has to offer
  • pretty different from the past–would have been rather unusual to write a C++ program that had other languages mixed in

Grails?

  • full stack MVC platform for the JVM
    • build system down to ORM, etc.
  • leverages proven staples
    • spring, hibernate, groovy, quartz, java, sitemesh
  • extensible plugin system
    • e.g. can pull out hibernate and use a different persistence mechanism, or can write your own plugins
  • since grails is built on the jvm you can take advantage of any language that will run on the jvm

Demo

  • showing how to write a grails app that uses a groovy controller, but a "math helper" class can be written in groovy, or java, or clojure
  • as long as there's a bean in the spring context, regardless of language, it can be injected into grails controllers
  • the grails controller doesn't care what language the classes it uses are written in

Clojure

  • core grails doesn't have clojure support, but there's a clojure plugin
  • plugin creates an src/clj file for clojure source
  • code in the grails controller doesn't have to change to take advantage of the math helper written in the different languages


(ns grails)
(defn addNumbers [x y]
  (+ x y))

Using the Clojure Plugin

  • call via clj.mathHelper.addNumbers(x, y)
  • clj is the same as calling getClj()
  • the clojure plugin adds the getClj method to all of the grails classes
    • classes*.metaClass*."getClj" = { return proxy } — this is done in the withDynamicProperties method in the plugin config
    • the proxy is an instance of the clojure proxy class (grails.clojure.ClojureProxy)
  • no addNumbers method in the proxy class — uses methodMissing
  • plugin looks for clojure methods in the grails namespace by default
    • if you need another namespace, clj['mynamespace'].methodName() will handle this
  • clojure plugin declares that it wants to watch all the files in src/clj
    • gets notified when files change and compiles the files if they change
  • if your plugin is adding things to the metaclasses on application startup, then you need to make sure your plugin also modifies controllers, services, etc. as they're changed while the application is running
  • can also observe only specific plugins for changes, e.g. only notify me when something involved with hibernate changes
  • can swap out other view technologies in grails
  • for taglibs, they have to be written in groovy, but inside the taglib you could be making calls to code in other languages

Who gets the credit?

  • grails? groovy? clojure? java? the jvm?
  • really it's the combination of all of them
  • don't have to walk away from grails to take advantage of any of the languages that run on the jvm

How to Analyze Your Data and Take Advantage of Machine Learning in Your Application #s2gx

Christian Schalk – Google

Google’s New Cloud Technologies

  • google storage for developers 
    • api compatible with amazon s3
  • prediction api (machine learning)
  • bigquery

Google Storage

  • store your data in google’s cloud 
    • any format, any amount, any time
  • you control access to your data 
    • private, shared, public
  • access via google apis or third party tools/libraries
  • sample use cases 
    • static content hosting, e.g. static html, images, music, video
    • backup and recovery
    • sharing
    • data storage for applications 
      • e.g. used as storage backend for android, appengine, cloud based apps
    • storage for computation 
      • bigquery, prediction api

Google Storage Benefits

  • high performance and scalability 
    • backed by google infrastructure
  • strong security and privacy 
    • control access to your data
  • easy to use 
    • get started fast with google and third party tools

Google Storage Technical Details

  • restful api 
    • get, put, post, head, delete
    • resources identified by uri
    • compatible with s3
  • buckets — flat containers
  • objects 
    • any type
    • size: 100 gb / object
  • access control for google accounts 
    • for individuals and groups
  • two ways to authenticate requests 
    • sign request using access keys
    • ???

Performance and Scalability

  • objects of any type and 100GB/object
  • unlimited numbers of objects, 1000s of buckets
  • all data replicated to multiple US data centers
  • leveraging google’s worldwide network for data delivery
  • only you can use bucket names with your domain names
  • read-your-writes data consistency
  • range get

Security and Privacy Features

  • key-based authentication
  • authenticated downloads from a browser

Getting Started with Google Storage

  • go to http://code.google.com for basic info
  • http://code.google.com/apis/storage (currently in preview mode) 
    • getting started guide, docs, etc.
    • can sign up for an account
  • command line tool available — gsutil — low-level access from the command line, scripting
  • google storage manager — web-based tool for managing google storage

Google Storage Usage Within Google & Early Adopters

  • google bigquery
  • google prediction api
  • google.org — imagery
  • google patents
  • panoramio
  • picnik
  • vmware
  • US Navy
  • theguardian
  • socialwok
  • xylabs
  • etc.

Pricing

  • storage: $0.17/gb/month
  • also costs for up/downloads
  • similar pricing to amazon s3
  • preview in US 
  • non-US preview available on case-by-case basis

Google Prediction API

  • google’s sophisticated machine learning technology
  • available as an on-demand restful http web service
  • provide a bit of text and “train” the algorithm in the service to predict outcomes based on patterns 
  • simple example: language detection 
    • provide series of examples of english, spanish, french, etc. and train the prediction api to recognize the language
  • endless number of applications 
    • customer sentiment
    • transaction risk
    • etc

Prediction API Examples

  • predict and respond to emails in an automated way

Using the Prediction API

  • three step process 
    • upload training data to google storage
    • build a model from your data
    • make new predictions

Training

  • POST prediction/v1.1/training?data=mybucket…
  • can respond when the prediction engine is ready and gives an estimate of accuracy

Predict

  • apply the trained model to make predictions on new data
  • returns json data
  • includes scores indicating confidence of prediction

Prediction API Capabilities

  • data 
    • input features: numeric or unstructured text
    • output: up to hundreds of discrete categories
  • Training 
    • many machine learning techniques

Prediction Demo

  • cuisine predictor
  • spreadsheet of type of food (e.g. mexican, italian, french) and food description as training data
  • upload spreadsheet to google data storage
  • kick off training process, then can check to see if it’s done
  • pretty accurate predictions even on a limited training dataset

Google BigQuery

  • also resides on top of google storage
  • can have large amounts of data that you can quickly analyze using sql-like language
  • fast, simple to use

Use Cases

  • interactive tools
  • spam
  • trends detection
  • web dashboards
  • network optimization

Key Capabilities

  • scalable to billions of rows
  • fast–response in seconds
  • simple–queries in sql
  • webservice based–rest, json

Using BigQuery

  • upload to google storage
  • call bigquery service to import raw data into bigquery table
  • perform sql queries on table

Security and Privacy

  • google accounts
  • oauth
  • https

Tools

  • bigquery shell utility available — just type sql commands and get responses back
  • can tie in a google spreadsheet and point it to a bigquery table

Google App Engine for Business 101 #s2gx

How to Build, Manage & Run Your Business Applications on Google’s Infrastructure
Christian Schalk – Developer Advocate, Google

  • not really an advocacy position
  • still in engineering, but work a lot more with users directly
  • go out to companies to help them be successful

What is cloud computing?

  • lots of different definitions
  • pyramid of (bottom up): 
    • infrastructure as a service 
      • joyent, rackspace, vmware, amazon web services
      • provides cooling, power, networking
    • application platform as a service 
      • GAE falls in this category
      • tools to build apps
    • software as a service 
      • google docs, etc.

GAE

  • easy to build
  • easy to maintain
  • easy to scale 
    • appengine resides in google’s overall infrastructure so will scale up as needed
  • started with only python
  • with java support, opened the doors for java enterprise developers

By the Numbers

  • launched in 2008
  • 250,000 developers
  • 100,000+ apps
  • 500M+ daily pageviews 
    • 19,000 queries per second — has almost doubled since January

Some Partners

  • best buy
  • socialwok
  • xylabs
  • ebay
  • android developer challenge
  • forbes
  • buddypoke 
    • 62 million users
  • gigya 
    • do social integration for large media events (movie launches, sports events) — huge spikes in traffic so GAE just handles it
  • ubisoft
  • google lab
  • ilike
  • walk score
  • gigapan
  • others
  • point here is it’s very easy to drop specific apps on GAE without running literally everything on GAE
  • very popular among social networking apps because of easy scalability

Why App Engine?

  • managing everything is hard
  • diy hosting means hidden costs 
    • idle capacity
    • software patches & upgrades
    • license fees
  • “cloud development in a box”

App Engine Details

  • collection of services 
    • memcache, datastore, url fetch, mail, xmpp, task queue, images, blobstore, user service
  • ensuring portability — follows java standards 
    • servlets -> webapp container
    • jdo/jpa -> datasource api
    • java.net.URL -> URL fetch
    • javax.mail -> Mail API
    • javax.cache -> memcache
  • extended language support through jvm 
    • java, scala, jruby, groovy, quercus (php), javascript (rhino)
  • always free to get started
  • liberal quotas for free applications 
    • 5M pageviews/month
    • 6.5 CPU hours/day

Application Platform Management

  • download and install SDK 
    • Eclipse plugin also available
  • build app and then deploy to the public GAE servers
  • app engine dashboard
  • app engine health history 
    • shows status of each service individually across GAE as a whole

Tools

  • google app engine launcher for python
  • sdk console 
    • local version of the app engine dashboard
  • google plugin for eclipse 
    • wizard for building new app engine apps
    • can run the entire gae environment locally within eclipse
    • easy deployment to app engine servers
    • in process of building a new version of this with more features

Continuously Evolving

  • aggressive schedule for providing new features
  • may 2010 — app engine for business announced

What’s New?

  • multi-tenant apps with namespace API
  • high performance image serving
  • openid/oauth integration
  • custom error pages
  • increased quotas
  • app.yaml now usable in java apps
  • can pause task queues
  • dashboard graphs now show 30 days
  • more — see http://googleappengine.blogspot.com

Getting Started

Creating and Deploying an App

  • demoing eclipse plugin
  • can create a new Google Web Application, optionally with GWT
  • projects follow the typical java webapp structure
  • before deployment, can test/debug locally just like any Java project in eclipse
  • even the datastore is available locally for development/testing
  • new features tend to be introduced in python first, then java gets them later
  • to deploy, right click the project, choose “google,” then deploy 
    • this brings up a window where you put in your application ID and version, then uploads to the GAE servers
  • can log into GAE dashboard and configure billing with maximum charges if your app will exceed the free quotas
  • can use your own custom domains, this ties into google apps
  • can assign additional developers to GAE applications by email address
  • can deploy new versions of applications and keep the old ones as well, can toggle between versions and choose one as default

What about business applications?

  • GAE for Business
  • same scalable cloud hosting platform, but designed for the enterprise
  • not quite production yet
  • enterprise application management 
    • centralized domain console (preview available today)
  • enterprise reliability and support 
    • 99.9% SLA
    • direct support 
      • tickets tracked, phone support, etc.
  • hosted SQL (preview available today) 
    • managed relational sql database in the cloud
    • doesn’t replace the datastore–available in addition to the datastore
  • ssl on your domain 
    • current core product doesn’t offer this
  • secure by default 
    • integrated single signon
  • pricing that makes sense 
    • apps cost $8/user, up to a max of $1000 per month

Enterprise App Development With Google

  • GAE for Business
  • Google Apps for Business
  • Google Apps Marketplace
  • Firewall tunneling technology available (Secure Data Connector)

App Engine for Business Roadmap

  • enterprise admin console (preview)
  • direct support (preview)
  • hosted sql (limited release q4 2010)
  • sla (q4 2010)
  • enterprise billing (q4 2010)
  • custom domain ssl (2010 – 2011)

SQL Support

  • can run this all locally in eclipse
  • demo of spring mvc travel app running on GAE with the SQL database 
    • have to explicitly enable sessions
    • had to disable flow-managed persistence

Become an App Engine for Business Trusted Tester!

Developing Social-Ready Web Applications #s2gx

Craig Walls – SpringSource

  • working on Spring Social, which is the brains behind Greenhouse (web/mobile conference app for SpringOne)

Socializing Your Applications

  • why would you want to do this?
  • this is where your customers are–lots of people spend a LOT of time on Facebook
    • if they're there, you want to be there with them
  • Facebook–over 500 million active users
    • third largest country in the world
    • 50% log on to Facebook on any given day
    • there's even a movie about it–that says something
  • Twitter — over 100 million users
    • more than 190 million unique visitors monthly
    • more than 65 million tweets per day
  • Others: LinkedIn (80 million members), TripIt (230,000 trips planned per month)
  • More: FourSquare, YouTube (2 billion videos viewed per day), MySpace, Gowalla, Google, Flickr
  • how do you use this to better your application?
    • really depends on the customers and applications
    • don't want to make people come to you, better to interact with people where they already are
    • you can have your customers tell you things about themselves and this data would be hard to get otherwise

Types of Social Integration

  • widgets
    • facebook xfbml/js; the "like" button
      • xfbml — tag library that's interpreted on the client by javascript
    • twitter @anywhere
    • linkedin widgets / linkedin jsapi
      • jaspi resembles xfbml
  • embedded
    • facebook applications
    • igoogle gadgets
    • myspace applications
  • rest api
    • provided by virtually all social networks
    • consumed by external and embedded applications

Widgets

  • facebook connect
    • xfbml tag on page adds the login button to any page (<fb:login-button …>Connect to Facebook</fb:login-button>)
    • demoing "find my facebook friends" functionality (<fb:multi-friend-selector …> — fbml tags that run on the server)
  • twitter @anywhere offers some javascript-based widgets, e.g. follow, connect with twitter
    • can also linkify and hovercard text–does this with a class to add the links and javascript handles adding links (hovercard is the thing that shows the little twitter profile boxes for users)
    • twitter anywhere has great examples in their documentation

Facebook Embedded Applications

  • hosted on your own servers, but look seamless when you're on facebook (look like they're part of facebook)
  • can leverage widgets, REST APIs, javascript apis, etc.
  • most often used for games, quizzes, surveys, etc.

Accessing Social Data with REST Social APIs

  • common operations
    • get user profile
    • get/update status
    • get list of friends
  • specialized operations
    • facebook: create photo album, create a note, etc.
    • twitter: create/follow a list, view trends
    • tripit: retrieve upcoming trips, view friends nearby
  • all done with restful apis
    • most support both json and xml representations

Searching Twitter

RestTemplate rest = new RestTemplate();
String query = "#s2gx";
String results = rest.getForObject("http://search.twitter.com/search.json?q={query}", String.class, query);

  • if you want to get friends on twitter, you get the user IDs back, so you have to make another call back to get info about the user based on the user id

Facebook Graph API

  • interesting form of REST API
  • two basic url patterns
  • if you don't have an authorization key you only get very basic info back (name, gender, country)

Securing Social Data: OAuth is the key to social data

  • most social data is secured behind oauth
  • authentication takes place on social provider
  • consumer application given an access token to access user's profile
    • this gets around having to give another application your login credentials
    • also lets you revoke access for specific applications
  • consumer never knows the user's social network credentials
  • demo of trying to post a tweet without being authorized–throws a 401 error
    • when you sign in via oauth you're signing into the originating application (e.g. facebook) and then facebook tells the application "yes, they provided the correct authentication and have given you permission to do what you told them you were going to do"
    • click "connect with facebook" button from an application
    • box pops up from facebook where the user logs in and grants permissions
    • facebook then makes the connection and gives the application an access key

Comparing OAuth and OpenID

  • openid
    • primary concern is single sign-on
    • shared credentials for multiple sites
    • authentication takes place on your chosen openid server
  • oauth
    • concern is shared data
    • sign into the host application
    • host application then gives some other application access
  • if you sign on via oauth the underlying mechanism could be openid

Versions of OAuth in Play

  • OAuth 1.0: tripit
  • OAuth 1.0a: twitter, linkedin, foursquare, most others
  • OAuth 2: still in draft; early adoption by facebook (not quite full oauth 2), salesforce, gowalla, github, 37signals
    • on target to go final by the end of the year

Signing a request: OAuth 1.0a

  • construct a base string that includes …
    • the http method
    • the request url
    • any parameters (including post/put body parameters if the content type is "application/x-www-form-urlencoded")
  • encrypt the base string to create signature
    • commonly hmac-sha1, signed with api secret
    • could be plaintext or rsa-sha1 (if supported)
  • add authorization header to request

The OAuth 2 Dance — much simpler than oauth 1

  • request authorization from user
  • return to consumer with the authorization code in the request
  • exchange auth code and client secret for access token
  • return access token to consumer for use in REST API calls

Easy Facebook OAuth

  • <fb:login-button perms="email,publish_stream,offline_access">Connect to Facebook</fb:login-button>
  • offline access = the application can access your facebook account at any time
  • oauth 2 gives you the option to create an access token that will expire after a period of time
  • oauth 2 also has a renewal token so you can renew expired tokens, but facebook doesn't support renewal tokens yet
  • if you give the application the "give this app access at any time" permission, it's really just a way to not have the access token expire
    • currently access tokens expire after about an hour
  • once you authorize with FB, you get a cookie back called fbs_appKey (where appKey is your application's key)
    • cookie also includes the access token and user id
  • if you store access tokens in your application's local database, you should store them encrypted
  • once you have the access token, you make the same call to facebook but pass the access token, and then you get a lot more of the profile info from facebook

Social REST API Challenges

  • signing a request for oauth 1.0(a) is difficult when using Spring's RestTemplate
  • each social provider's api varies wildly
  • getting a facebook access token requires parsing the cookie string
  • how should various http response codes be handled?

Spring Social

  • supports social integration in Spring
  • born out of Greenhouse development

TwitterTemplate

  • simplifies signing of OAuth 1 requests through RestTemplate
  • offers a consistent template-based API across social providers
  • extends spring MVC to offer Facebook access token and user ID as controller parameters
  • maps social responses to a hierarchy of social exceptions
  • Spring Social can get at the actual response to a 4XX error code which you can't get if you're using RestTemplate directly
  • similar to using JdbcTemplate which gives you more detail than the raw sql exceptions
  • Spring Social includes TwitterTemplate to make interacting with twitter much easier

FacebookTemplate

  • a bit simpler since all that's needed is the access token
  • FacebookTemplate facebook = new FacebookTemplate(ACCESS_TOKEN);
  • String profileId = facebook.getProfileId();
  • also linkedin template and tripittemplate

Spring Social Next Steps

  • expanding available operations in social templates
  • more social templates for other providers

Introduction to Tomcat 7 #s2gx

Mark Thomas, SpringSource

  • Tomcat 7 Supports …
    • Servlet 3.0
    • JSP 2.2
    • EL 2.2
    • Java 1.6
  • New major release of Tomcat every time the spec has a major change
  • Servlet 3.0
    • asynchronous processing
    • pluggability
    • annotations
    • session management
    • miscellaneous
  • Asynchronous processing
    • request processing is synchronous, but the response processing can now be asynchronous
    • outline
      • start asynch processing
      • request/response passed to a new thread
      • container thread returns to the pool
      • new thread does its work
    • allows container threads to be used more efficiently
      • when waiting for external resources
      • when rationing to a resource
      • or any other time when the container thread would be blocking
    • allows separation of request and response
      • chat applications
      • stock tickers
    • all filters, servlets, and valves in the processing chain must support asynchronous processing
    • not as asynchronous as COMET
  • pluggability
    • purpose was to improve developer productivity–worry less about application configuration
    • annotations
    • web fragments
    • static resources in JARs
    • programmatic configuration options
    • pros
      • development can be faster
      • apps can be more modular
    • cons
      • fault diagnostics are significantly hampered
      • might end up enabling things you don't want or need
    • overall, I don't recommend using it for production
    • instead:
      • get tomcat to generate the equivalent web.xml
      • use the equivalent web.xml instead
    • can be frustrating to figure out what's going on when the application is doing things that aren't in web.xml
    • JARs can contain their own web.xml
    • allows JARs to be self-contained
    • JARs can also contain static resources
      • always used, cannot be excluded by fragment ordering
      • non-deterministic if there are duplicate resources in multiple JARs
  • annotations
    • servlets, filters, listeners
      • can be placed on any class
      • tomcat has to scan every class on application start
    • JARs scanned if included in fragment ordering
      • can exclude JARs from the scanning process; controlled in catalina.properties
    • security, file upload
      • placed on servlets
      • processed when class is loaded
    • file upload has almost–but not quite–the same API as Commons File Upload
      • don't have to ship commons file upload with your apps anymore
    • with annotations the configuration can become a lot more opaque
    • can turn all of this off in your main web.xml–turn off metadata complete
      • this is all or nothing–can't pick and choose what bits you want on or off
  • programmatic configuration
    • allows a subset of things you can do in web.xml
      • add servlets, filters, and listeners
      • change session tracking
      • configure session cookies
      • configure security
      • set initialization parameters
    • allows greater control / optional configuration
    • some environment-specific settings
    • can make troubleshooting difficult–no xml to refer to in order to see what's going on
    • main advantage is doing things like if/thens in your configuration which you can't do in web.xml
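
Here's a minimal sketch of programmatic configuration from a context listener; the servlet class, system property, and init parameter are assumptions for illustration.

    // A listener annotated with @WebListener may register servlets/filters at startup,
    // which allows the if/then-style decisions that a static web.xml can't express.
    import javax.servlet.ServletContext;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.ServletRegistration;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class AppConfigListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            ServletContext ctx = sce.getServletContext();
            // environment-specific decision that plain web.xml can't make
            if ("dev".equals(System.getProperty("app.env"))) {
                ServletRegistration reg = ctx.addServlet("debug", "com.example.DebugServlet");
                reg.addMapping("/debug");
            }
            ctx.setInitParameter("appMode", "staging");
        }

        public void contextDestroyed(ServletContextEvent sce) { }
    }
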
  • servlet 3.0 – session tracking (see the sketch after these bullets)
    • adds tracking via ssl session id
      • must be used on its own
    • allows selecting of supported tracking methods
      • url, cookie, ssl
    • url based tracking is viewed as a security risk
      • can't turn this off in servlet 2.2, but can turn it off in servlet 3.0
      • another release of tomcat 6 will likely allow this to be turned off
    • session id is cryptographically secure — can't be spoofed
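
Selecting the supported tracking methods can be done programmatically as well as in web.xml; a minimal sketch (restricting to cookies disables URL rewriting):

    import java.util.EnumSet;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.SessionTrackingMode;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class TrackingModeListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            // cookie-only tracking; SSL-based tracking would have to be used on its own
            sce.getServletContext().setSessionTrackingModes(EnumSet.of(SessionTrackingMode.COOKIE));
        }
        public void contextDestroyed(ServletContextEvent sce) { }
    }
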
  • servlet 3.0 – session cookies (see the sketch after these bullets)
    • can control default parameters for session cookies
      • name – may be overridden by tomcat
      • domain – may be overridden by tomcat
      • path – may be overridden by tomcat
      • maxage
      • comment
      • secure – may be overridden by tomcat
      • httponly – may be overridden by tomcat
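
The same defaults can be set programmatically through SessionCookieConfig; a minimal sketch (the values are illustrative, and as noted above Tomcat may override some of them):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.SessionCookieConfig;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class CookieConfigListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            SessionCookieConfig cfg = sce.getServletContext().getSessionCookieConfig();
            cfg.setHttpOnly(true);  // hide the session cookie from scripts
            cfg.setSecure(true);    // only send the cookie over HTTPS
            cfg.setMaxAge(30 * 60); // 30 minutes
        }
        public void contextDestroyed(ServletContextEvent sce) { }
    }
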
  • servlet 3.0 – misc
    • httpOnly
      • not in any of the specs
      • however, widely supported
      • prevents scripts accessing the cookie content
      • provides a degree of XSS protection
    • programmatic login (see the sketch after these bullets)
      • useful when creating a new user account
      • can log the user in without redirecting them to the login page
      • allows the application to trigger a login
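
A minimal sketch of programmatic login right after account creation; the servlet, URL, and parameter names are made up, and login() authenticates against whatever realm the container is configured with.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/register")
    public class RegistrationServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String user = req.getParameter("username");
            String pass = req.getParameter("password");
            // ... create the account in the user store here ...
            req.login(user, pass); // no redirect to the login page needed
            resp.sendRedirect(req.getContextPath() + "/home");
        }
    }
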
  • jsp 2.2
    • property group changes
    • can specify default content type in jsp-config
    • can specify the buffer size for a page
    • new feature – error-on-undeclared-namespace
      • e.g. if you have a typo when using a tag library it fails silently
      • with error-on-undeclared-namespace turned on, error is thrown at compile time
    • jsp:attribute adds support for the omit attribute
  • EL 2.2
    • now possible to invoke methods on a bean
    • correctly identifying the intended method is tricky
    • likely to be some differences between containers since the spec is unclear on the behavior
    • tomcat tries to do what the java compiler does
  • other tomcat 7 changes: management
    • add the ability to fix the remote jmx ports
      • previously jmx picked a port at random
    • single line log formatter
    • manager app can distinguish between primary, backup, and proxy sessions (for clusters)
    • aligned mbeans with reality (GSoC 2010)
    • general improvements to JMX support
      • can now have a server.xml with just a <Server …/> element and create a fully working Tomcat instance (Hosts, Contexts, etc. all via JMX)
        • can't save this config out but that's being worked on
  • performance
    • unlikely to see a big change
    • can limit the number of JSPs loaded at any one time
      • useful for development
    • not many areas where tomcat needs a big performance boost
  • security
    • generic CSRF protection
      • visiting a site with malicious code could trigger your browser to make a call to the tomcat manager and deploy an app that gives access to your machine
      • now the manager looks for a token that was passed from the previous response to the manager app and if the token doesn't exist, the request will fail
    • separate roles for manager and host manager apps
    • session fixation protection
      • changes session ID on authentication
    • enable the LockOutRealm by default (e.g. lock out user for 10 minutes after 5 failed login attempts)
    • enable an access log by default
    • added ability to disable exec command for SSI
  • code cleanup
    • use of generics throughout
    • removed deprecated and unused code
    • reduced duplication, particularly in the connectors
    • better definition of the lifecycle interface
    • added checkstyle to the build process
    • if you've written your own custom tomcat components, you might need to change them for tomcat 7
  • extensibility
    • added hooks for rfc66 – used by virgo
    • refactored to simplify geronimo integration
    • significantly simpler embedding
  • stability
    • builds on tomcat 6
    • tomcat 6 is already very stable
    • significant reductions in the open bug count
      • 6 open bugs without patches when i wrote this slide
      • for tomcat 5.5.x, 6.0.x, and 7.0.x combined
    • added unit tests
      • CI using BIO, NIO, and APR/native on every commit
    • memory leak detection and prevention
      • back-ported to tomcat 6
  • flexibility
    • copying of /META-INF/context.xml is now configurable — can control whether or not the expansion/copying of this file happens
    • alias support for contexts
      • map external content into a web application
      • keeps tomcat from deleting things in a symlink when the app is undeployed
    • shutdown address is now configurable
      • deliberately limited to localhost by default
    • tomcat equivalent of some httpd modules
      • mod_expires
      • mod_remoteIP
  • tomcat 7 status
    • passes servlet 3.0 TCK with every combination of connectors
    • passes jsp 2.2 TCK
    • passes EL 2.2 TCK
    • all with the security manager enabled
    • note that just because it passes the TCK doesn't necessarily mean it's fully compliant
    • 7.0.4 just released today
  • when will tomcat 7 be stable?
    • when three +1 votes come from committers
    • in practice the committers each have their own criteria
    • i'm looking for 2-3 releases with …
      • no major code changes that might cause regressions
      • tcks all pass (already have this)
      • no major bugs reported
      • good levels of adoption (already have this)
  • tomcat 7 plans
    • one release every month
      • bug 49884 put a spanner in the works
    • stable by the end of the year?
    • keep on top of the open bugs
    • work on bringing the open enhancement requests down
    • if all goes well, 7.0.6 will be the stable release
    • jsr 196 implementation?
      • authentication SPI for containers
      • geronimo has most (all?) of this already
    • windows authentication
      • looking unlikely — too much baggage
        • needs some native libraries for it to work well
      • waffle project already does this
    • simpler jndi configuration for shared resources
      • no more <ResourceLink … />
    • more jmx improvements
    • further improvements to memory leak protection
    • continue migration from valves to filters
    • java ee 6 web profile
      • no interest so far from user community
      • had more questions from journalists than users
      • no plans at present
      • adds a lot of baggage that isn't that useful
      • if you want a web profile implementation, there's geronimo
  • useful resources
  • new feature — rolling update/side-by-side deployment
    • can deploy a new version while the app is running and when a user's session expires, they hit the new version of the app
    • came out of a tc server requirement but made more sense to implement it in Tomcat
    • springsource providing patch to ASF and will be part of a future tomcat release
    • deploy a new WAR with the same name as an existing app, but add ##N at the end of the war file name where N is the version (e.g. myapp##1.war will be a new version of myapp.war)
      • context path is retained, meaning context path is the same for both versions of the app
    • feature that will be added is when no more sessions are active on the old version it will be automatically undeployed