The ColdFusion/CFML Discussion: We’re Finally Getting Somewhere

I’m sure by now you’ve seen Joe Rinehart’s “Dear John” video to ColdFusion, and Mike Henke posted a funny and, I think (as I’ll explain in detail below), very pertinent response video.

I won’t rehash everything that’s been said there as well as in the various discussion outlets the past few days, but I did want to comment on the situation by saying this: after years of tiptoeing around I think we’re finally getting somewhere.

For me, I saw the writing on the wall for Adobe ColdFusion about 5 years ago, and I was already planning to jump ship at that point for numerous reasons. Many of my reasons were technical ones (and sadly haven’t changed in ColdFusion in 5 years), but another major reason was due to my firm belief in using free software whenever possible. All that combined with me doing a lot of open source work for a closed, proprietary platform led to cognitive dissonance I could no longer ignore.

Then in 2008 OpenBD was announced as a GPLv3-licensed fork of BlueDragon. This came at exactly the right moment for me because it meant I could keep using CFML but run it on a completely free software stack.

The release of OpenBD also addressed one of the other major issues I had with ColdFusion: After seeing how free software projects are run, the level of interaction between users and developers, the ability for community members to contribute and have a direct impact on the future of the project … none of that was true with ColdFusion. I simply couldn’t keep using and supporting something that didn’t work this way.

I quickly switched over to OpenBD and haven’t looked back. We have a couple of ColdFusion 8 servers running some apps that don’t need much attention, but we moved the majority of our applications to OpenBD (99% without issue, despite anything you’ve heard about switching engines being difficult). As our legacy (and I do mean legacy — some of these apps are 10+ years old) ColdFusion apps need updates they’re moved to OpenBD, and all of our new development is done on OpenBD. We deploy our projects as WARs to Tomcat and life as a CFML developer has never been better.

I give you that background simply to point out that in my world, Adobe ColdFusion hasn’t had anything to do with my CFML development for many years now, and I haven’t missed a thing. In fact I’ve gained a great deal: not just a faster engine with really compelling features Adobe CF still doesn’t have, but the ability to contribute directly to OpenBD in concrete ways with patches to the engine and by building the admin console, not to mention the fantastic discussions on the OpenBD mailing list that lead directly to new features being implemented in the engine in days, not years.

When I saw Joe’s video I realized I was watching it with the perspective of a disinterested bystander. He made some valid points, though as far as the installer goes “who cares” was my reaction since I think we’d all do very well to stop treating CF as if it’s an application server and treat it as what it is, which is a Java web application.

But honestly 99% of Joe’s complaints with CFML as a language are addressed in OpenBD and Railo. My reaction in a lot of cases was “they haven’t fixed that in CF yet?” but again, since I haven’t used the product for years now, it doesn’t impact me.

The major point I think Joe makes (and the one that Mike’s video makes at the end) that is tremendously pertinent to this discussion is the constant battle between “more stuff” — meaning new marquee features that demo well but don’t work for crap in the real world, or features no one cares about — and less marketing-friendly work like improved language syntax, removing the dead weight (which is hugely important), and other improvements the actual users of the product (i.e. the developers) want.

I’m glad this came up in this way because it gets to what I think is the heart of the matter: ColdFusion is a commercial product. How do you keep people buying commercial products year after year? By adding more “features.” We developers may think of better language syntax as a feature, but we’re not typically the ones with the checkbook, and == instead of eq doesn’t demo well to the suits with the money.

This is why ColdFusion is in the state it’s in, and it’s a great illustration of what happens when there’s a profit motive behind a software product of this type: you wind up with “features” that are bright and shiny and demo well to the people who don’t know any better, and release after release you don’t get much in the way of the actual improvements developers need in order to keep using ColdFusion.

To be blunt, for a commercial product that’s been around for years ColdFusion should be much, much better than it is. The fact that it isn’t speaks volumes.

There’s a reason there are no other commercial products along the lines of ColdFusion in the world: because the market can’t support them. Allaire/Macromedia/Adobe got in early with enough customers to keep this going for a while longer, but there is a definite sense that they’re de-emphasizing CF as a product (I’ll stop short of saying they’re putting it out to pasture).

Based on discussions with other developers as well as my own recent experience, this “de-emphasis” is the story more and more people are hearing from Gartner these days. ColdFusion hasn’t fallen into the “Migrate” bucket of their “Invest/Maintain/Migrate” spectrum, but it’s getting there, and Gartner flat-out said on a call I was on just last week that they do not recommend starting new development on ColdFusion if you know it’s a strategic product you’re going to be maintaining long-term.

The bigger problem here is the increasing frustration of CFML developers. Once the community starts bleeding developers, impressing the suits or anything Gartner shows on a chart or graph won’t matter. If the suits can’t find anyone who knows the technology, and they’re hearing from analysts it’s not the way to go, they’ll move to something else. All the shiny new features in the world won’t fix that.

How this relates to the free software engines is also interesting, because the other engines have the albatross of Adobe CF compatibility around their necks. In many, many cases on the OpenBD side we look at how something works in Adobe CF and the only reaction a logical person could possibly have is “WTF,” and in other cases we have ideas for changes that would mean vast improvements in speed or functionality, but we’re saddled with remaining compatible with Adobe CF. It’s a continually frustrating fine line, and given the state of ColdFusion it’s one I’m personally seeing as less important to continue to walk.

I didn’t wind up where I thought I was going to when I started writing this, but my main point as I state in the title of this post is this: we’re finally getting somewhere with the discussions. For far too many years there’s been nothing but infighting, people forming camps, alliances, cliques, etc. and getting behind one engine or another, all to our collective detriment. Ultimately that’s counterproductive and wastes the incredibly limited resources we have as a community.

We also need to stop beating around the bush. I’m as guilty of this as anyone simply because of the vitriol I’ve had thrown my way over the years, particularly immediately after I quit as an Adobe Community Professional and joined the OpenBD Steering Committee. You start asking yourself if it’s worth the hassle to say anything.

But if I can’t state my opinion on things as truthfully and, hopefully, as respectfully as I can (without watering things down to the point of being meaningless) without getting a purely emotional reaction from people who choose to stick their heads in the sand, that’s their problem, not mine. Just because I don’t share your opinion doesn’t mean I’m spreading FUD, or being nasty, or anything along those lines. We all need to realize that unless we can have these sorts of discussions without screaming at each other irrationally we aren’t going to make any progress.

Regardless of our engine of choice we can all benefit from improvements to the CFML language and the underlying and supporting technologies, and I’ll say flat-out here that I don’t see any of those sorts of innovations — the kinds of innovations we as developers need — coming from Adobe. They by definition have completely different motivations and to keep CF going they need to make decisions for what from my perspective are all the wrong reasons. You don’t wind up with something that’s good for developers that way.

Look around the development world. There is not a single product remaining in the world in the same basic category as ColdFusion that you have to buy. Prior to the free software engines coming along, unless you count .NET (which is a completely different, possibly more subtle argument), CFML was the only pay-to-play language out there. (And please don’t say “Websphere” or anything along those lines — that’s not the same type of product at all. Adobe convinced us for years that CF is an app server. It’s not, and they’ve been trying to fool people into thinking it is for far too long.)

I’ve been in the CFML world now for a very long time. I’ve been hearing the “but CF pays for itself!” arguments for 15 years now. I even believed those arguments at one point and you know what? It doesn’t matter. We lost. That ship has sailed. We would do ourselves and our community a huge favor by not pretending those tired old arguments are still worth the breath it takes to utter them.

People don’t pay for this stuff anymore, nor should they. There are far, far too many excellent free software solutions in the world — many of which are rolled right into Adobe ColdFusion, by the way — for us to keep thinking we have some sort of lock on productivity or amazing features or whatever the hell other arguments we used to use to try and convince the naysayers. If we’re still talking that same old crap, it’s quite clear we’re only trying to convince ourselves at this point.

That’s not to say it’s all doom and gloom. I wouldn’t still be here if I didn’t think CFML was a great technology. I wouldn’t be writing this blog post, or be spending time on the Open CFML Foundation, OpenBD, Open CF Summit, and all the other CFML-related things I do if I didn’t think the language was worth perpetuating (OK, saving).

The bottom line is this: painful as all of this may be to hear for some people, we’re finally — after years and years of ignoring our problems — getting somewhere. Regardless of the outcome of all these discussions and any casualties that may occur along the way, that’s only a good thing.

If you’re thinking about “leaving” CFML as Joe did I can’t say I blame you. There are a lot of great tools out there and it’s in your best interest as a developer to try them. Adding more tools to your toolbox makes you more aware of the broader scope of the technology world, which is a great way to expand your skills and your mind, not to mention make yourself more marketable.

I love Groovy and Grails, and still use Grails from time to time. I’d be lying if I said I hadn’t thought about switching to Grails full time. There are a lot of great technologies out there and a lot of very compelling reasons to jump ship. Some days sticking with CFML seems downright irrational in the face of all the arguments to the contrary.

But something keeps us in the CFML world. Any one of us is more than capable of learning another technology, but we stick around for some reason, and for me that reason is that even after everything I’ve seen in the technology world, CFML is still a great technology for web development, and it still stands up pretty respectably against anything that’s come along in the interim.

Could it use improvement? Sure. What couldn’t? And that’s kind of my point.

Rather than dumping CFML for another technology, I’d hope people would get fired up and start asking how they can help improve CFML. If you have ideas about what you’d like to see in CFML the free software engines would love to hear them, and you’ll be surprised at how quickly many of these ideas would happen. If you’re happy with Adobe CF, great. Keep using it. But if Adobe CF isn’t giving you what you need, you don’t need to wait for Adobe to make things happen.

There’s no technical reason why anything that’s done in any other technology (within reason of course) couldn’t be done with CFML. All we need are the voices to guide CFML’s future and the will to make it happen.

Prerequisites for CFML on Tomcat Deep Dive at cf.Objective()

I just added the prerequisites for the Tomcat Deep Dive I’m doing at cf.Objective() this year to the session description page, but figured I’d summarize here as well in case people don’t notice it over there.

This is designed as a “bring your own laptop” session and we’ll actually be installing and configuring Tomcat, multiple CFML engines, and web server connectivity in the session, but it would be VERY helpful if you grab all the downloads ahead of time since the wireless can be sketchy at conferences. Also, some of the downloads are large, which will not only eat up bandwidth but may also run afoul of hotel networks that cut off downloads after they hit a certain size.

So here’s the short list:

  • Java. Not just the stuff that ships with ColdFusion if you already have that installed, but a plain old JDK. (Don’t worry, it won’t conflict with anything ColdFusion-specific you already have installed.) Java 7 should work but to be safe grab the latest Java 6 (which at the moment is 1.6.0_32).
  • Apache Tomcat version 7. Grab the .tar.gz for GNU/Linux or OS X, or the appropriate .zip for Windows. I’d say don’t grab the service installer for Windows for these purposes, but if you want to install Tomcat as a service on your machine that’s fine too.
  • OpenBD. Grab the “J2EE Standard WAR” (any version)
  • Railo. Grab the “Railo Custom – WAR Archive” (any version)
  • ColdFusion. Anything version 8 or above will work (probably even older versions), including the CF 10 Beta, but the main point here is that you need the actual ColdFusion installer. So if you have ColdFusion installed on your machine already and don’t have the installer, that won’t work. To use ColdFusion in this context you’ll be running the installer and generating a WAR file (and feel free to generate the WAR file ahead of time if you already know how to do this).
  • Apache Web Server, or if you’re on Windows with IIS 7 we’ll go over that as well. I won’t be discussing IIS 6, both because it’s horrendously more painful than 7 and also because it’s ancient. Note that Apache runs on any platform so even if you’re on Windows, or if you’re on Windows with IIS 6, grab Apache.
Hope to see you there! It’s going to be a lot of fun. Well, geek fun anyway.

CFML XMLTransform() and Character Encoding

Quick tip on using CFML’s XMLTransform() — if you see fun weird characters in the output of the transformation like Â, and you’ve checked to make sure the response headers from the web server are correctly returning UTF-8, you probably just need to specify the charset of the CFFILE operations when you read the XML and XSLT files from disk.

In my case I was seeing non-breaking spaces being rendered as “Â ”, which outputs a capital ‘A’ with a circumflex before the non-breaking space. At first I thought maybe the response from the web server was ISO-8859-1 for some reason instead of UTF-8, but after verifying that was correct, I added charset="utf-8" to the CFFILE tags that read the XML and XSLT files from disk and all was right with the world.
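
In tag form the fix looks something like this (the file paths here are placeholders):

<!--- read both files as UTF-8 so XmlTransform() sees the right characters --->
<cffile action="read" file="#ExpandPath('data.xml')#" variable="xmlSource" charset="utf-8" />
<cffile action="read" file="#ExpandPath('transform.xsl')#" variable="xslSource" charset="utf-8" />
<cfset transformed = XmlTransform(xmlSource, xslSource) />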

Detecting Date Range Conflicts

I’m working on an application for which one of the requirements is to not allow double-booking of rooms. Events in the system each have a start and end date and time, and when a new event is saved the system needs to tell the user if there are any overlaps with existing events in the same room.

This seems simple enough on the face of it but once I started thinking about all the possibilities around this I realized it was a lot more complex than I had initially thought. After some good old-fashioned “sledge hammer approach to get it working and to help gain understanding that will hopefully lead to eventual refinement” I think I have it licked.
I’m sure this is one of those classic problems that I just haven’t had to deal with before, which are always fun to think through, and whenever I run into one of these I resist the urge to search for a solution until I’ve wrapped my head around the problem or am ready to admit defeat. (And I really try never to admit defeat unless time constraints force me to.)
My first step in solving this problem was to consider all the possible conflict states, which in plain English are:
  1. Since an event is assumed to have a non-zero duration, if either the start date/time or end date/time is exactly the same as the start date/time or end date/time of another event in the same room, that indicates a conflict. Note that one event’s start date/time can be the same as another event’s end date/time.
  2. If an event has a start date/time that is between the start and end date/time of another event, that indicates a conflict.
  3. If an event has an end date/time that is between the start and end date/time of another event, that indicates a conflict.
  4. If an event’s start date/time is after that of another event but its end date/time is before that of another event, that indicates a conflict.
Granted some of these overlap, are redundant, or are the inverse of one another, but it was helpful as a first pass to simply think through all the scenarios to start forming a picture in my head of the various possibilities.
I’ll spare you the messy middle step here and just say I then started coding all these scenarios (and anything else I thought of) and as I went through that exercise, I realized that this all boils down to some pretty simple logic.
Assume that we have two events and each one has a start and end date/time. We’ll use start1 and end1 for the first event’s dates, and start2 and end2 for the second event’s dates. Here’s what I came up with after a lot of head banging that I believe handles all the scenarios:
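
In CFML the check comes out to something along these lines (a simplified sketch; the argument names are illustrative, and the real version also limits the comparison to events in the same room):

<cffunction name="hasConflict" returntype="boolean" output="false">
  <cfargument name="start1" type="date" required="true" />
  <cfargument name="end1" type="date" required="true" />
  <cfargument name="start2" type="date" required="true" />
  <cfargument name="end2" type="date" required="true" />
  <!--- Two events overlap when each one starts before the other ends.
        Back-to-back events (one ends exactly when the other starts) are fine,
        but identical start times or identical end times are conflicts. --->
  <cfreturn DateCompare(arguments.start1, arguments.end2) lt 0
        and DateCompare(arguments.start2, arguments.end1) lt 0 />
</cffunction>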

Consider yourself lucky I spared you the big hairy mess I had before I arrived at that solution. I believe that covers all the bases, however, and at least in the testing I did it certainly seems to.

The only other wrinkle in the case of this system is making sure that an event isn’t flagged as conflicting with itself if someone updates the event and either doesn’t change the dates or changes them in a way that would overlap with that event’s state that’s already in the database. To handle that case I still run the function to detect conflicts, but if I only get back one conflict and its ID is the same as the event I’m trying to save, I ignore it.

So that’s how I spent more time than I care to admit this weekend. I’m curious if other people have solved this differently, and definitely would love to hear if this won’t address some scenario I didn’t consider.

Setting Up Jenkins to Deploy CFML Applications as WAR Files

I finally got my act together a couple of weeks ago and set up a Jenkins server so we could start auto-deploying our applications to staging servers. Since we’re doing agile/scrum on our projects now the product owners tend to like to see changes as they happen and we also have dedicated testers involved with some of our projects, so automating deployment to a staging server is saving us a lot of time and headaches.

We run all our CFML applications on OpenBD and Tomcat and for the most part deploy applications as self-contained WAR files, so the deployment steps in this environment are:

  1. Get latest code from Subversion trunk (we use trunk for ongoing development and branch for releases)
  2. Create WAR file
  3. Transfer WAR file to target server for deployment
Pretty simple. I should note at this point that I will not be covering incorporating unit tests into the build/deploy process, both because I want to focus only on the Jenkins stuff for this post and because that aspect of things is covered quite well elsewhere. (And I’ll be honest: we aren’t yet doing unit testing consistently enough in our code for it to be part of our build process, but we’re working towards that.)
I also won’t cover installing Jenkins since there are many resources on that as well. In my case on Ubuntu Server it was a simple matter of adding Jenkins to sources.list, doing sudo apt-get install jenkins, and then doing a bit of Apache configuration to get up and running. You can read more about installing Jenkins on Ubuntu here, and if you have specific questions about that aspect of things I can answer I’m happy to try.

Step 1: Create an Ant Build Script

As for the specifics of setting this up, the first step is to create an Ant script to tell Jenkins what to do when the job runs (we’ll create the Jenkins job in a bit). This is key because without a build script Jenkins doesn’t really do much, so we’ll create a build.xml in the root of our project, and when we create the Jenkins job we can tell it which target from the build script to run.
Since Jenkins “knows” about Subversion you do not have to include anything in your build script to pull the code from Subversion. So given that our applications get deployed as self-contained WAR files, all our Ant script has to do is build the WAR file from the code Jenkins pulls from Subversion.
I should clarify that even though I’m explaining the Ant script first, Jenkins actually runs the build script after it pulls code from Subversion. Since you specify an Ant target when you create the Jenkins job, though, I figured the build script was worth covering first.
Here’s a sample build script.
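
Something like this (a skeleton to adapt; the property values, image extensions, and exclude list are placeholders for your own project):

<project name="foo" default="war" basedir=".">

  <!-- properties: values referenced in multiple places below -->
  <property name="build.dir" value="build" />
  <property name="dist.dir" value="dist" />
  <property name="war.name" value="foo.war" />
  <!-- file extensions treated as binary and copied in their own step -->
  <property name="imageFiles" value="**/*.gif, **/*.jpg, **/*.png, **/*.ico" />

  <!-- copy the application into the build directory: everything except the
       image files first, then the image files on their own -->
  <target name="build" depends="init">
    <copy todir="${build.dir}">
      <fileset dir="${basedir}"
               excludes="${imageFiles}, ${build.dir}/**, ${dist.dir}/**, build.xml, .project, **/.svn/**" />
    </copy>
    <copy todir="${build.dir}">
      <fileset dir="${basedir}" includes="${imageFiles}" />
    </copy>
  </target>

  <!-- package the contents of the build directory as a WAR in the dist directory -->
  <target name="war" depends="build">
    <war destfile="${dist.dir}/${war.name}" basedir="${build.dir}" needxmlfile="false" />
  </target>

  <!-- delete and recreate the build and dist directories so we start with a clean slate -->
  <target name="init">
    <delete dir="${build.dir}" />
    <delete dir="${dist.dir}" />
    <mkdir dir="${build.dir}" />
    <mkdir dir="${dist.dir}" />
  </target>

</project>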

The script isn’t nearly as daunting as it may look so let’s walk through it.

The properties section at the top declares variables we’ll use later so we don’t have to hard-code those values in multiple places in the script.

The next section is the targets and these are specific tasks that can be executed. Note that these targets may have dependencies, so if you execute a target that has dependencies the dependencies will run first in the order they are declared, and then the target you specified will run.

In the case of this script we have three targets: build, war, and init. The war target depends on build, and build depends on init, so when we specify ‘war’ as our target in Jenkins later that means init will run, then build, then war, so let’s look at these in order.

The init target at the bottom does basic cleanup by deleting the directories into which the source code is dumped and where the WAR file is built so we start with a clean slate.

The build target runs two copy jobs to get the application files into the build directory, which is just a directory to temporarily hold the files that will be included in the WAR. First the build target copies all the non-image files into the build directory, and then it copies all the image files into the build directory.

The reason for doing this in two steps is that if you copy plain text and image files in the same copy job, the image files become corrupted in the process. As you can see, the first copy operation excludes image files and the second includes only the image files as identified by file extension in the imageFiles property declared at the top of the script. If you have other binary files in your applications that may become corrupted (note that JAR files seem unaffected by this issue …) you’ll want to add those file extensions to the property that indicates which files in your application are binary.

Also note that I’m excluding the build and dist directories, the build.xml file, the .project file that Eclipse adds to projects, and all the .svn files so those aren’t included in the build.

So at this point after init runs we have clean directories for doing our build, and then the build target copies all the files (other than those being excluded) from the root of the project into the build directory.

The last step is to create the WAR file, and this is (not surprisingly) done in the war target in the build script. Since Ant knows how to build WAR files this is pretty simple; you just point the war command to the directory where the application files are located (the build directory in this case) and tell it the target name and location of the WAR file, which we’re putting into a dist directory.

To review, what we’ll tell Jenkins to do in a minute is to run the war target (which in turn is dependent upon the init and build targets) in our build script, which will:

  1. Run the init target which deletes and recreates the build and dist directories so we start with a clean slate
  2. Run the build target which copies all the code and binary files from the root to the build directory
  3. Run the war target which creates a WAR file from the code in the build directory and puts it in the dist directory
Once you have your Ant script created, save it as build.xml in the root of your project and commit that to SVN so Jenkins will have it available when it runs the build.

Step 2: Create a Jenkins Job

With the hard part out of the way, next you’ll need to create a job in Jenkins by clicking on “New Job” in the top left of the Dashboard.
Give the job a name, select “Build a free-style software project” and click “OK.”

Step 3: Point Jenkins to Your SVN Repository

On the next screen you can configure some additional settings like whether or not to keep previous builds, if there are any build parameters you need to specify, etc., but we’ll focus on the SVN configuration for the purposes of this post.
Select Subversion under Source Code Management and enter the details about your repository. This tells Jenkins where it’s going to get the code to do the build.

Be sure to give Jenkins the full path to the appropriate directory in Subversion. For example if you build from trunk and your project name is foo, your URL would be something like http://subversionserver/foo/trunk not just http://subversionserver/foo

As a reminder, since we deploy our CFML applications as WAR files using OpenBD, our SVN repository includes not only our application’s source code but also the OpenBD engine, so this is traditional Java web application deployment. This is a great way to do things because the application is truly self-contained and all the configuration such as datasources, mappings, etc. is all in SVN. This way you can pull down the project from SVN and be up and running instantly, and it makes deployment really simple.

Step 4: Set the Jenkins Build Trigger

At this point Jenkins knows where your code is but we need to tell Jenkins what triggers the build process to run. You can do this multiple ways but in my case I simply set up Jenkins to poll SVN every 5 minutes to check for new code. Another common way to do this is to use a post-commit hook in SVN to hit a Jenkins URL that triggers the build, but polling is working well for us.

Scroll down to the Build Trigger section of the configuration screen.

Check the box next to “Poll SCM,” and then you can set the polling schedule using crontab-style notation. Mouse over the ? next to the box if you need a refresher on the syntax, but the example below tells Jenkins to poll SVN every five minutes to see if there are any changes. If there are changes the build will be triggered. We’ll review what happens when the build is triggered at the end of this post.
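
For reference, a schedule that polls every five minutes looks like this in that notation (the five fields are minute, hour, day of month, month, and day of week):

*/5 * * * *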

Step 5: Set the Ant Build Target

Just a couple more steps in configuring the Jenkins job. Next we need to tell Jenkins which target in build.xml to run as part of the build. Calling build.xml is kind of an implied step with Jenkins since you don’t have to explicitly tell it to look for build.xml. It’s assumed you’ll have an Ant script in the root of your project and that either the default target or a specific target will be run as part of the build process.
In the Build section of the configuration page, specify ‘war’ as the target to run from your build.xml file in the root of your project.
At this point Jenkins will:
  1. Poll SVN every 5 minutes to check for changes
  2. If there are changes, Jenkins will pull everything from SVN
  3. After everything is pulled from SVN, Jenkins will execute the war target from build.xml which will generate a WAR file that can be deployed to Tomcat (or any servlet container)
The last step is getting the generated WAR file to a target server.

Step 6: Configure the Post-Build Action to Deploy the WAR

One of the great things about Jenkins is the huge number of plugins available, and we’ll be using the SCP plugin in this final step. There are also deployment plugins for various servlet containers but since in the case of Tomcat that involves hitting the Tomcat manager and uploading the WAR over HTTP, I found SCP to be much more efficient and flexible.
After you install the SCP Plugin you need to go to “Manage Jenkins” and then “Configure System” to configure your SCP target, user name, and password. These are configured globally in Jenkins and then you simply select from a dropdown in the post-build action section of the Jenkins project configuration.
In the post-build action section of the project configuration:
  1. Check the box next to “Publish artifacts to SCP repository”
  2. Select the appropriate SCP site in the dropdown
  3. Specify the artifact to copy to the server. In our case this is the WAR file, and you specify the path and filename relative to the root of the Jenkins project. For example if you check things out from an SVN directory called ‘trunk’ and use the same directories in the Ant script above, your WAR file will be in trunk/dist/foo.war
  4. Specify the destination relative to the path you specified when you set up the SCP server, if necessary. If you specified Tomcat’s webapps directory as the root in the SCP server and all your projects live in that directory you may not need to specify anything here.
One more configuration issue to note — in the case of Tomcat you need to make sure the host for your application is configured to auto-expand WARs and auto-deploy. This way, when the WAR copy is complete, Tomcat will deploy the new version of the application.
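
A sketch of what that looks like in Tomcat’s server.xml (the host name and appBase here are placeholders):

<!-- unpackWARs expands the WAR on copy; autoDeploy redeploys the app when the WAR changes -->
<Host name="staging.example.com" appBase="webapps"
      unpackWARs="true" autoDeploy="true" />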

Summary

As with most things of this nature it took a lot longer to write this blog post than it will to set this stuff up. The only even remotely involved portion of all of this may be tweaking the Ant script to meet your needs, but the rest of the process is pretty straightforward.
At the end of all of this we wind up with a Jenkins job that polls SVN for changes every 5 minutes and if there are changes, this triggers the build. The build process:
  1. Pulls down changes from SVN
  2. Runs the war target in the build.xml in the root of the project, which …
    1. Dumps a clean copy of the application into the build directory
    2. Creates a WAR from the build directory and puts the WAR into the dist directory
  3. Copies the WAR file to a target server using SCP
Once the WAR file is copied to the target server, provided that Tomcat is configured to do so it will redeploy the application using the new WAR file.
There are of course a bunch of different ways to configure a lot of this, but this is working well for us. If you have other approaches or if anything I’m doing could be improved upon, I’d love to hear how you’re using Jenkins.

A Reminder of the Power of CFML Custom Tags

Yeah, they’ve been around forever, and many people forgot all about custom tags when that whole CFC thing came about, but I still absolutely love custom tags and think they are incredibly useful in the view layer of an application.

I was reminded of this today while working on Enlist, which is an open source application for managing volunteers and events. It was again the focus of the hackfest at OpenCF Summit this year and we’re pushing towards a 1.0 release before long.

One of the things that was added to the application at OpenCF Summit this year was DataTables, which is a really slick jQuery plugin that adds sorting, searching, and paging to HTML tables, and with their latest update it works fantastically well with Twitter Bootstrap.

I’m sure many of you are already familiar with DataTables but for those of you who aren’t, the way it works is you simply add a little JavaScript at the top of a page containing a table to which you want to add the DataTables functionality. Identify the table by its ID and voila, you’re done.

The trick comes in when you’re using this on several pages, especially when you’re adding a bit more to the JavaScript as far as specific functionality, placement of controls, etc. In that case what you wind up with is the same code on numerous pages with the only difference being the ID of the table to which you’re adding DataTables, and this doesn’t give you much flexibility to do things like enable row clicking on one table but not another.

Enter the underappreciated CFML custom tag. This is a perfect use case for a custom tag, because this allows the JavaScript functionality of DataTables to be wrapped with a little CFML to add some flexibility and intelligence to what otherwise would be a lot of copy/pasta JavaScript code.

You can see the code for the custom tag on GitHub, but basically this wrapper for the DataTables JavaScript lets a table ID, table body ID, and row link be passed in, and that’s then applied appropriately.
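
The real tag is worth reading in full, but a stripped-down sketch of the idea (minus the row-linking and the extra attributes) looks something like this:

<!--- datatable.cfm: simplified sketch of a DataTables wrapper tag --->
<cfparam name="attributes.tableID" type="string" />

<cfif thisTag.executionMode is "end">
  <!--- the wrapped table has already been output, so append the DataTables setup script --->
  <cfoutput>
  <script type="text/javascript">
    $(document).ready(function() {
      $("###attributes.tableID#").dataTable();
    });
  </script>
  </cfoutput>
</cfif>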

As for using the tag on a page, it’s as simple as importing the tag library and then wrapping the HTML table with the custom tag:

<!--- datatable.cfm custom tag lives in /enlist/customtags --->
<cfimport prefix="tags" taglib="/enlist/customtags" />
...
<tags:datatable tableID="myTableID" tableBodyID="myTableBodyID">
  <table id="myTableID">
    <thead>
      <tr>
        <th>Foo</th>
        <th>Bar</th>
        <th>Baz</th>
      </tr>
    </thead>
    <tbody id="myTableBodyID">
      <tr>
        <td>Foo!</td>
        <td>Bar!</td>
        <td>Baz!</td>
      </tr>
    </tbody>
  </table>
</tags:datatable>

There’s a bit more to it in its real-world use which you can see here since it does row linking as well.

Simple enough, but oh so powerful. Now I have a flexible, reusable wrapper for DataTables that I can drop into any page and customize further as needed.

So as you’re messing with all the highfalutin doo-dads we have at our disposal in CFML these days, don’t forget about the lowly custom tags because they can add a ton of flexibility and power to your view layer.


Installing Packages for Sublime Text 2 on Linux

I decided to give Sublime Text 2 a try on the next sprint on my current project. I’ve heard a lot of great things about it and have been impressed in the bit of messing around I’ve done thus far, and as I’ve said before, although CFEclipse rocks for CFML development, after using it for years and years Eclipse is just starting to feel like a lot more than I need. Eclipse is great for the Groovy and Java work that I do, but for CFML I’ve been looking for something more lightweight, because for CFML work I tend to use Eclipse as a pretty basic editor and file navigator. Like most programmers I also tend to get bored and simply want to try new things once in a while.

I used emacs on the last round of updates to the OpenBD admin console. I really, really like emacs but you’re stuck with using the HTML syntax highlighting and code formatting since there’s no CFML plugin for emacs (that I could find anyway), so it falls over pretty hard if you try to do too much CFSCRIPT. I also use vim quite a lot as an editor but for full-blown project work I’ve never made the switch for whatever reason. I’m also a big fan of UltraEdit and although they do have a Linux version, it’s pretty sluggish. Hopefully that’ll get better in newer releases.
But I digress–the real point of this post is a quick tip on where to put Sublime Text packages on Linux. Not a huge thing but I figured I’d share since I did have to do a bit of hunting around. Even though Sublime Text is available for Linux (which is awesome), most of the information around this assumes you’re using either Windows or Mac.
After you extract Sublime Text 2 and run it for the first time it creates the directory ~/.config/sublime-text-2 and this is where you put your packages. You just copy the directory containing the package you want to install into ~/.config/sublime-text-2/Packages, restart Sublime Text, and you’re done.
Let’s use the ColdFusion Plugin as an example. After unzipping the plugin, you’ll copy the ColdFusion directory (the entire directory, not just the contents) into ~/.config/sublime-text-2/Packages so you’ll wind up with the directory ~/.config/sublime-text-2/Packages/ColdFusion. Restart Sublime Text and if you go to View -> Syntax you’ll see ColdFusion in the list.
Note that in some of the Mac instructions I found they indicated you have to also add a symlink in ~/.config/sublime-text-2/Installed Packages that points to the directory of the package. I did that first and it works but given that all the other packages in ~/.config/sublime-text-2/Packages show up in the menus, I decided to delete the symlink and after restarting Sublime Text everything still works.
I’ll be using Sublime Text 2 hot and heavy over the next few weeks so I’ll share my experience with it. If you have any tips for a n00b or stuff that tripped you up when you first started using Sublime Text I’d love to hear them.

String Matching in CouchDB Views

We’re in the process of porting an application that has been running on SQL Server over to the fabulous and amazing CouchDB. We were originally under the impression that everyone accessing data from this application in their own code was doing so through our web service, which would have made our job pretty simple since we could swap the guts of the web service methods out and return the same data types to the caller, but upon further investigation we discovered that people had written their own custom queries directly against the database.

This alone isn’t a big deal but in some cases people were running queries that included LIKE clauses, and since we opted not to install CouchDB-Lucene given both time constraints as well as the fact that the LIKE queries against SQL Server were pretty limited in scope and number, I thought I’d share what we came up with to do string matching in views in CouchDB.

This is by no means to suggest you should not use CouchDB-Lucene if you want true full-text searching against data in CouchDB, but in our case this was an acceptable compromise.

Matching Fields That Start With a String in Couch

SQL Equivalent: WHERE field LIKE 'foo%'

Let’s assume I have a database called test and in that database I have documents that have fields of firstName and lastName. I want to write a view that will let me do wildcard matches against first names that begin with a string.

This turns out to be pretty simple given how keys work in CouchDB map functions. Since a view emits a key and a value and we can use start and end keys in our calls to CouchDB, we simply provide the string against which we want to match as our start key and some end key that will ensure we don’t get back more than what we’re wanting.

For example, let’s say I want to match all documents in my database that start with ‘Mat’ so I can retrieve all people with a first name of Matt, or Matthew, or Mathew, or Mat, or Mathias … you get the idea.

First I write a view that in its map function emits firstName as the key:

function (doc) {
  if (doc.firstName && doc.lastName) {
    emit(doc.firstName, doc);
  }
}

Assume that my design document is ‘people’ and that’s the map function for a view called ‘byFirstName.’ To call that view and get back only people with a first name starting with ‘Mat’ I use the following URL:

http://couch/test/_design/people/_view/byFirstName?startkey="Mat"&endkey="MatZ"

In case that wraps poorly in the blog post display, here’s just the start and end keys:

startkey="Mat"
endkey="MatZ"

That tells CouchDB to start its output for that view with anything that starts with Mat and end once it hits anything that starts with MatZ.

Matching Specific Strings Contained in Fields

SQL Equivalent: WHERE field LIKE '%KnownString%'

We had some use cases where users had canned queries (i.e. users can’t enter random search terms) that were looking for a specific term contained anywhere within a specific field. I say specific term here and in the example I use “KnownString” because if you know the string ahead of time this is a simple problem to solve, whereas ad hoc terms are more problematic, but I’ll address that below.

Remember that within CouchDB views you have full access to JavaScript, so solving this use case is simply a matter of using a regex to match against the known term.

Let’s say I want to pull all documents that have a bio field containing the term ‘CouchDB’:

function(doc) {
  if (doc.bio && doc.bio.toUpperCase().match(/\bCOUCHDB\b/)) {
    emit(doc._id, doc);
  }
}

Again, since I know the term ahead of time I can do a regex match against it quite easily in my view.

Matching Ad Hoc Strings Contained in Fields

SQL Equivalent: WHERE field LIKE '%adHocSearchTerm%'

Where things get tricky in CouchDB without using something like CouchDB-Lucene is matching ad hoc strings. “Tricky” is actually putting it mildly, because the real story is you can’t do this in CouchDB. So in use cases where people had code that had a search box into which users could type anything, we had to come up with another solution.

What I’ve found as I’ve been using CouchDB more and more is that it can shift things that you used to do in the database layer up into the application layer, and vice-versa. So in this case it was simply a matter of coming up with a view that pulled back a subset of documents into the application code, and then doing the matching there.

One caveat here is that since our database contains thousands of documents, it wasn’t really feasible to pull back all the documents in the database and then perform matching in the application layer. Since these documents all have a date associated with them, what we wound up doing is using a date range as the start and end keys to reduce the number of documents we have to match against in the application. This isn’t a huge burden on users and it certainly helps performance.

We wound up limiting documents returned by year (i.e. the users have to choose a year in which to search), which is enough of a range to not make things too annoying for users, but is also a small enough set of documents not to kill performance on the application side.

To call the view that uses date as its key, the URL params look like this to pull back all documents for 2011 in descending date order:

?startkey="2012/01/01"&endkey="2011/01/01"&descending=true

Remember that when you order descending you essentially flip the start and end keys around, hence why 2012/01/01 is used as the start key.

Once I have the documents back, I then deserialize the JSON into something usable by CFML and then loop over the documents to do my further refinement by search term.

Leaving out the subset controlled by date I described above, assuming I wanted to find all people with a bio field that contained the search term entered by a user on a form, the code winds up looking something like this:

<cfhttp url="http://server/test/_design/people/_view/hasBio"
        method="get"
        result="peopleJSON" />

<cfset peopleReturned =
        DeserializeJSON(peopleJSON.FileContent).rows />

<cfset matchingPeople = ArrayNew(1) />

<cfloop array="#peopleReturned#" index="person">
  <cfif FindNoCase(form.searchTerm, person.value.bio) neq 0>
    <cfset ArrayAppend(matchingPeople, person) />
  </cfif>
</cfloop>

What we wind up with there is the matchingPeople array will contain only the people who had the search term included in their bio field.

The big caveat here again is that if you have a huge number of documents you can get into trouble on the application side, so make sure you limit what you get back from CouchDB since you’ll wind up looping over all of those documents to do your search term matching.

Hope that helps others do some quick and dirty LIKE type queries in CouchDB. If there’s a better way to do any of these I’m all ears!

Prerequisites For My cf.Objective() Presentation on Tomcat

Quick note to anyone planning to attend my “Running Multiple CFML Engines on Apache Tomcat” talk at cf.Objective() — even though this is only a one-hour session, with just a bit of prep work you can easily turn this into a hands-on session since I only have a few slides and it will be mostly demo. You don’t have to follow along to get a ton of great info from this session, but if you want to follow along please grab the following ahead of time:

Some additional notes:

  • You do NOT need to install Tomcat ahead of time
  • You SHOULD install Apache ahead of time
  • If you want to use Adobe CF as one of your engines, you’ll want to run the installer ahead of time and for the installation type choose “generate a WAR file” and have that available on your laptop. Note that even if you have Adobe CF installed on your machine already, you can run the installer again and generate a WAR file without affecting your existing installation.
  • For Open BlueDragon and Railo, grab the WAR files and have those handy
  • Your operating system doesn’t matter–all the Tomcat stuff is pure Java, so whether you’re on GNU/Linux, Windows, or Mac it’s all good.

If you have questions/concerns ahead of time please comment here or email me. See you at cf.Objective()!