Using “Online Accounts” With Ubuntu 12.04

One of the features I really like in Linux Mint is the “online accounts” integration you can enable by clicking your user name at the top right of the screen. It lets you integrate your Google (and other online account) contacts and calendar with the native OS applications.

I’ve been trying Ubuntu 12.04 RC2 on my System76 Lemur Ultrathin to see if I want to use it instead of Mint 12 on my main work machine once I get a new System76 Serval to replace my (still excellent!) three-year-old Serval. I was a bit disappointed to see that the online accounts feature doesn’t show up as an option when you click on your user name.

I was happy to discover tonight that it’s in Ubuntu 12.04, just in a different place. If you go to the Contacts application you can add your online account from there.

Contacts seem to work fine, but what I don’t yet see is how the calendar works (or doesn’t work), and honestly I’m starting to think maybe the calendar functionality I’m seeing in Mint is by virtue of Evolution, which doesn’t come preloaded on Ubuntu.

Having my calendar accessible right from the OS and having it pop up alerts is pretty darn handy so I’ll have to dig into that further and see what my options are. I’m sure there’s plenty of calendar applications available but the seamless integration is one thing I do like about Linux Mint.

If any of you Ubuntu users out there have suggestions on this I’m all ears.

CFML XMLTransform() and Character Encoding

Quick tip on using CFML’s XMLTransform() — if you see weird characters like Â in the output of the transformation, and you’ve checked to make sure the response headers from the web server are correctly returning UTF-8, you probably just need to specify the charset of the CFFILE operations when you read the XML and XSLT files from disk.

In my case I was seeing non-breaking spaces rendered as “Â ”, i.e. a capital ‘A’ with a circumflex before the non-breaking space (which is what you get when UTF-8 bytes are decoded as ISO-8859-1). At first I thought maybe the response from the web server was ISO-8859-1 for some reason instead of UTF-8, but after verifying that was correct, I added charset="utf-8" to the CFFILE tags that read the XML and XSLT files from disk, and all was right with the world.
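For example (a sketch, not the actual application code — the file names here are hypothetical):

```cfml
<!--- Read both files as UTF-8 so multi-byte characters survive the transform --->
<cffile action="read" file="#expandPath('data.xml')#" variable="xmlSource" charset="utf-8">
<cffile action="read" file="#expandPath('transform.xsl')#" variable="xslSource" charset="utf-8">
<cfset result = XMLTransform(xmlSource, xslSource)>
```

Without the charset attribute, CFFILE falls back to the JVM's default encoding, which is where the mangled characters come from.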

Detecting Date Range Conflicts

I’m working on an application for which one of the requirements is to not allow double-booking of rooms. Events in the system each have a start and end date and time, and when a new event is saved the system needs to tell the user if there are any overlaps with existing events in the same room.

This seems simple enough on the face of it but once I started thinking about all the possibilities around this I realized it was a lot more complex than I had initially thought. After some good old-fashioned “sledge hammer approach to get it working and to help gain understanding that will hopefully lead to eventual refinement” I think I have it licked.
I’m sure this is one of those classic problems that I just haven’t had to deal with before which are always fun to think through, and whenever I run into one of these I resist the urge to search for a solution until I’ve wrapped my head around the problem and am ready to admit defeat. (And I really try never to admit defeat unless time constraints force me to.)
My first step in solving this problem was to consider all the possible conflict states, which in plain English are:
  1. Since an event is assumed to have a non-zero duration, if either the start date/time or end date/time is exactly the same as the start date/time or end date/time of another event in the same room, that indicates a conflict. Note that one event’s start date/time can be the same as another event’s end date/time.
  2. If an event has a start date/time that is between the start and end date/time of another event, that indicates a conflict.
  3. If an event has an end date/time that is between the start and end date/time of another event, that indicates a conflict.
  4. If an event’s start date/time is after that of another event but its end date/time is before that of another event, that indicates a conflict.
Granted some of these overlap, are redundant, or are the inverse of one another, but it was helpful as a first pass to simply think through all the scenarios to start forming a picture in my head of the various possibilities.
I’ll spare you the messy middle step here and just say I then started coding all these scenarios (and anything else I thought of) and as I went through that exercise, I realized that this all boils down to some pretty simple logic.
Assume that we have two events and each one has a start and end date/time. We’ll use start1 and end1 for the first event’s dates, and start2 and end2 for the second event’s dates. Here’s what I came up with after a lot of head banging that I believe handles all the scenarios:
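The logic boils down to the classic interval-overlap test: two events conflict if each one starts before the other ends. A minimal sketch in Python (the post doesn't show the original code, so this is a reconstruction using the start1/end1/start2/end2 names above):

```python
from datetime import datetime

def overlaps(start1, end1, start2, end2):
    """Two events conflict if each starts before the other ends.
    Touching endpoints (one event's end equals the other's start)
    do not count as a conflict, matching rule 1 above."""
    return start1 < end2 and start2 < end1

# Event A: 9:00-10:00, Event B: 9:30-10:30 -> conflict
a = (datetime(2012, 5, 1, 9), datetime(2012, 5, 1, 10))
b = (datetime(2012, 5, 1, 9, 30), datetime(2012, 5, 1, 10, 30))
print(overlaps(*a, *b))  # True

# Event C starts exactly when A ends -> no conflict
c = (datetime(2012, 5, 1, 10), datetime(2012, 5, 1, 11))
print(overlaps(*a, *c))  # False
```

Note that this single condition covers all four scenarios listed above: identical dates, a start inside another event, an end inside another event, and full containment.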

Consider yourself lucky I spared you the big hairy mess I had before I arrived at that solution. I believe that covers all the bases, however, and at least in the testing I did it certainly seems to.

The only other wrinkle in this system is making sure an event isn’t flagged as conflicting with itself when someone updates it, either without changing the dates or by changing them in a way that overlaps the event’s own state already in the database. To handle that case I still run the conflict-detection function, but if the only conflict returned has the same ID as the event I’m trying to save, I ignore it.
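Not the author's actual code, but a sketch of that update-time check, where each stored event is a dict with hypothetical id, room, start, and end fields:

```python
from datetime import datetime

def overlaps(start1, end1, start2, end2):
    # conflict if each event starts before the other ends
    return start1 < end2 and start2 < end1

def find_conflicts(event, existing):
    """Return conflicting events in the same room, ignoring the
    event's own stored row (matching id) when it's being updated."""
    return [e for e in existing
            if e["room"] == event["room"]
            and e["id"] != event.get("id")
            and overlaps(event["start"], event["end"], e["start"], e["end"])]
```

A brand-new event has no id yet, so `event.get("id")` returns None and nothing is excluded; an update excludes exactly its own stored row.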

So that’s how I spent more time than I care to admit this weekend. I’m curious if other people have solved this differently, and definitely would love to hear if this won’t address some scenario I didn’t consider.

Setting Up Jenkins to Deploy CFML Applications as WAR Files

I finally got my act together a couple of weeks ago and set up a Jenkins server so we could start auto-deploying our applications to staging servers. Since we’re doing agile/scrum on our projects now the product owners tend to like to see changes as they happen and we also have dedicated testers involved with some of our projects, so automating deployment to a staging server is saving us a lot of time and headaches.

We run all our CFML applications on OpenBD and Tomcat and for the most part deploy applications as self-contained WAR files, so the deployment steps in this environment are:

  1. Get latest code from Subversion trunk (we use trunk for ongoing development and branch for releases)
  2. Create WAR file
  3. Transfer WAR file to target server for deployment
Pretty simple. I should note at this point that I will not be covering incorporating unit tests into the build/deploy process both because I want to focus only on the Jenkins stuff for this post, as well as because that aspect of things is covered quite well elsewhere. (And I’ll be honest: we aren’t yet doing unit testing consistently enough in our code that it can be part of our build process, but we’re working towards that.)
I also won’t cover installing Jenkins since there are many resources on that as well. In my case on Ubuntu Server it was a simple matter of adding Jenkins to sources.list, doing sudo apt-get install jenkins, and then doing a bit of Apache configuration to get up and running. You can read more about installing Jenkins on Ubuntu here, and if you have specific questions about that aspect of things I can answer I’m happy to try.

Step 1: Create an Ant Build Script

As for the specifics of setting this up the first step is to create an Ant script to tell Jenkins what to do when the job runs (we’ll create the Jenkins job in a bit). This is key because without a build script Jenkins doesn’t really do much, so we’ll create a build.xml in the root of our project and then when we create the Jenkins job we can tell it which target from the build script to run.
Since Jenkins “knows” about Subversion you do not have to include anything in your build script to pull the code from Subversion. So given that our applications get deployed as self-contained WAR files, all our Ant script has to do is build the WAR file from the code Jenkins pulls from Subversion.
I should clarify that even though I’m explaining the Ant script first, Jenkins actually runs the build script after it pulls the code from Subversion. Since you specify an Ant target when you create the Jenkins job, though, it made sense to cover the script first.
Here’s a sample build script.
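The original script didn’t survive intact, but based on the walkthrough that follows, a reconstruction would look roughly like this (property values, paths, and the project name foo are all assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="foo" default="war" basedir=".">

  <!-- properties: directory names, WAR location, binary file extensions -->
  <property name="buildDir" value="build" />
  <property name="distDir" value="dist" />
  <property name="warFile" value="${distDir}/foo.war" />
  <property name="imageFiles" value="**/*.gif,**/*.jpg,**/*.png,**/*.ico" />

  <!-- clean slate: delete and recreate the build and dist directories -->
  <target name="init">
    <delete dir="${buildDir}" />
    <delete dir="${distDir}" />
    <mkdir dir="${buildDir}" />
    <mkdir dir="${distDir}" />
  </target>

  <!-- copy text files and image files in separate jobs so binaries
       aren't corrupted; exclude build artifacts, Eclipse and SVN files -->
  <target name="build" depends="init">
    <copy todir="${buildDir}">
      <fileset dir="."
               excludes="${imageFiles},${buildDir}/**,${distDir}/**,build.xml,.project,**/.svn/**" />
    </copy>
    <copy todir="${buildDir}">
      <fileset dir="." includes="${imageFiles}"
               excludes="${buildDir}/**,${distDir}/**" />
    </copy>
  </target>

  <!-- package the build directory as a WAR in the dist directory -->
  <target name="war" depends="build">
    <war destfile="${warFile}" webxml="${buildDir}/WEB-INF/web.xml">
      <fileset dir="${buildDir}" excludes="WEB-INF/web.xml" />
    </war>
  </target>
</project>
```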

The script isn’t nearly as daunting as it may look so let’s walk through it.

In the properties section at the top, that’s declaring variables we’ll use later so we don’t have to hard-code those values in multiple places in the script.

The next section is the targets and these are specific tasks that can be executed. Note that these targets may have dependencies, so if you execute a target that has dependencies the dependencies will run first in the order they are declared, and then the target you specified will run.

In the case of this script we have three targets: build, war, and init. The war target depends on build, and build depends on init, so when we specify ‘war’ as our target in Jenkins later that means init will run, then build, then war, so let’s look at these in order.

The init target at the bottom does basic cleanup by deleting the directories into which the source code is dumped and where the WAR file is built so we start with a clean slate.

The build target runs two copy jobs to get the application files into the build directory, which is just a directory to temporarily hold the files that will be included in the WAR. First the build target copies all the non-image files into the build directory, and then it copies all the image files into the build directory.

The reason for doing this in two steps is if you copy plain text and image files in the same copy job, the image files become corrupted in the process. As you can see the first copy operation excludes image files and the second includes only the image files as identified by file extension in the imageFiles property declared at the top of the script. If you have other binary files in your applications that may become corrupted (note that JAR files seem unaffected by this issue …) you’ll want to add those file extensions to the property that indicates which files in your application are binary.

Also note that I’m excluding the build and dist directories, the build.xml file, the .project file that Eclipse adds to projects, and all the .svn files so those aren’t included in the build.

So at this point after init runs we have clean directories for doing our build, and then the build target copies all the files (other than those being excluded) from the root of the project into the build directory.

The last step is to create the WAR file, and this is (not surprisingly) done in the war target in the build script. Since Ant knows how to build WAR files this is pretty simple; you just point the war command to the directory where the application files are located (the build directory in this case) and tell it the target name and location of the WAR file, which we’re putting into a dist directory.

To review, what we’ll tell Jenkins to do in a minute is to run the war target (which in turn is dependent upon the init and build targets) in our build script, which will:

  1. Run the init target which deletes and recreates the build and dist directories so we start with a clean slate
  2. Run the build target which copies all the code and binary files from the root to the build directory
  3. Run the war target which creates a WAR file from the code in the build directory and puts it in the dist directory
Once you have your Ant script created, save it as build.xml in the root of your project and commit that to SVN so Jenkins will have it available when it runs the build.

Step 2: Create a Jenkins Job

With the hard part out of the way, next you’ll need to create a job in Jenkins by clicking on “New Job” in the top left of the Dashboard.
Give the job a name, select “Build a free-style software project” and click “OK.”

Step 3: Point Jenkins to Your SVN Repository

On the next screen you can configure some additional settings like whether or not to keep previous builds, if there are any build parameters you need to specify, etc., but we’ll focus on the SVN configuration for the purposes of this post.
Select Subversion under Source Code Management and enter the details about your repository. This tells Jenkins where it’s going to get the code to do the build.

Be sure to give Jenkins the full path to the appropriate directory in Subversion. For example, if you build from trunk and your project name is foo, your URL would be something like http://subversionserver/foo/trunk, not just http://subversionserver/foo.

As a reminder, since we deploy our CFML applications as WAR files using OpenBD, our SVN repository includes not only our application’s source code but also the OpenBD engine, so this is traditional Java web application deployment. This is a great way to do things because the application is truly self-contained and all the configuration such as datasources, mappings, etc. is all in SVN. This way you can pull down the project from SVN and be up and running instantly, and it makes deployment really simple.

Step 4: Set the Jenkins Build Trigger

At this point Jenkins knows where your code is but we need to tell Jenkins what triggers the build process to run. You can do this multiple ways but in my case I simply set up Jenkins to poll SVN every 5 minutes to check for new code. Another common way to do this is to use a post-commit hook in SVN to hit a Jenkins URL that triggers the build, but polling is working well for us.

Scroll down to the Build Trigger section of the configuration screen.

Check the box next to “Poll SCM,” and then you can set the polling schedule using crontab style notation. Mouse over the ? next to the box if you need a refresher on the syntax, but in the example I have here that syntax tells Jenkins to poll SVN every five minutes to see if there are any changes in SVN. If there are changes the build will be triggered. We’ll review what happens when the build is triggered at the end of this post.
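For reference, the five-minute polling schedule (which is what I assume the screenshot that used to be here showed) looks like this in crontab notation:

```
*/5 * * * *
```

That's minute, hour, day of month, month, and day of week; */5 in the minute field means "every five minutes."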

Step 5: Set the Ant Build Target

Just a couple more steps in configuring the Jenkins job. Next we need to tell Jenkins which target in build.xml to run as part of the build. Calling build.xml is kind of an implied step with Jenkins since you don’t have to explicitly tell it to look for build.xml. It’s assumed you’ll have an Ant script in the root of your project and that either the default target or a specific target will be run as part of the build process.
In the Build section of the configuration page, specify ‘war’ as the target to run from your build.xml file in the root of your project.
At this point Jenkins will:
  1. Poll SVN every 5 minutes to check for changes
  2. If there are changes, Jenkins will pull everything from SVN
  3. After everything is pulled from SVN, Jenkins will execute the war target from build.xml which will generate a WAR file that can be deployed to Tomcat (or any servlet container)
The last step is getting the generated WAR file to a target server.

Step 6: Configure the Post-Build Action to Deploy the WAR

One of the great things about Jenkins is the huge number of plugins available, and we’ll be using the SCP plugin in this final step. There are also deployment plugins for various servlet containers but since in the case of Tomcat that involves hitting the Tomcat manager and uploading the WAR over HTTP, I found SCP to be much more efficient and flexible.
After you install the SCP Plugin you need to go to “Manage Jenkins” and then “Configure System” to configure your SCP target, user name, and password. These are configured globally in Jenkins and then you simply select from a dropdown in the post-build action section of the Jenkins project configuration.
In the post-build action section of the project configuration:
  1. Check the box next to “Publish artifacts to SCP repository”
  2. Select the appropriate SCP site in the dropdown
  3. Specify the artifact to copy to the server. In our case this is the WAR file, and you specify the path and filename relative to the root of the Jenkins project. For example if you check things out from an SVN directory called ‘trunk’ and use the same directories in the Ant script above, your WAR file will be in trunk/dist/foo.war
  4. Specify the destination relative to the path you specified when you set up the SCP server, if necessary. If you specified Tomcat’s webapps directory as the root in the SCP server and all your projects live in that directory you may not need to specify anything here.
One more configuration issue to note — in the case of Tomcat you need to make sure and have the host for your application configured to auto-expand WARs and auto-deploy. This way when the WAR copy is complete Tomcat will deploy the new version of the application.
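In Tomcat's server.xml that means the Host entry needs both flags set. A sketch with the default appBase, not necessarily our exact config:

```xml
<!-- unpackWARs expands the copied WAR; autoDeploy redeploys the app
     when a new WAR lands in appBase -->
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" />
```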

Summary

As with most things of this nature, it took a lot longer to write this blog post than it will take to set this stuff up. The only even remotely involved portion of all of this may be tweaking the Ant script to meet your needs, but the rest of the process is pretty straightforward.
At the end of all of this we wind up with a Jenkins job that polls SVN for changes every 5 minutes and if there are changes, this triggers the build. The build process:
  1. Pulls down changes from SVN
  2. Runs the war target in the build.xml in the root of the project, which …
    1. Dumps a clean copy of the application into the build directory
    2. Creates a WAR from the build directory and puts the WAR into the dist directory
  3. Copies the WAR file to a target server using SCP
Once the WAR file is copied to the target server, provided that Tomcat is configured to do so it will redeploy the application using the new WAR file.
There’s of course a bunch of different ways to configure a lot of this but this is working well for us. If you have other approaches or if anything I’m doing could be improved upon, I’d love to hear how you’re using Jenkins.

Google Cloud Print

One of my many first world problems stems from the fact that I have both Comcast/Xfinity and Verizon Frontier FiOS for Internet in my house. I use Comcast for all my home/entertainment junk and Frontier for work.

This is all well and good until I have to print to my network printer from my work machine, because the network printer is on the Comcast side. Not to mention that I wouldn’t be able to hit it while I’m on the VPN for work anyway, and I print so infrequently that it’s definitely not worth having two printers. (I told you this was a first world problem.)
Luckily there’s a solution for this: Google Cloud Print. Google Cloud Print lets you register any printer and it then can be printed to from anywhere. If you don’t have a cloud-ready printer (which I don’t) you can follow these instructions to register any printer.
Note that this does not allow you to print from any application to this printer. You have to be printing something from Google Chrome specifically from what I can tell, but of course any document you need to print this way can first be uploaded to Google Docs and printed from there.
Handy stuff. Mark this first world problem solved.

Setting Default umask for SFTP on Ubuntu Server

Much as described in this blog post by Jeff Robbins, I have a situation where two sftp users in the same group are both uploading files to an Ubuntu 10.04 server using Dreamweaver. The issue is that by default the permissions are 755, so even though both users are in the same group, only the file owner has write permissions. Since the users need to be able to overwrite each other’s files I needed a way to have the default permissions be 775.
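To see why the umask is the lever here, a quick sketch of how it determines those defaults (directories start from 777 and files from 666, with the umask bits removed):

```shell
tmp=$(mktemp -d)

# default umask 022: group write bit is masked off -> dirs 755, files 644
umask 022
mkdir "$tmp/a" && touch "$tmp/a/f"

# umask 002: group write preserved -> dirs 775, files 664
umask 002
mkdir "$tmp/b" && touch "$tmp/b/f"

stat -c '%a %n' "$tmp/a" "$tmp/a/f" "$tmp/b" "$tmp/b/f"
```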

What is outlined in the blog post above is exactly what I was after, but for some reason on Ubuntu server if you use what is the final edit in that post:
Subsystem sftp /usr/lib/openssh/sftp-server -u 0002
That results in “Connection closed” messages when you try to log in. The solution above that one works, just note the minor modification of pointing to /usr/lib/… instead of /usr/libexec/…
Subsystem sftp /bin/sh -c 'umask 0002; /usr/lib/openssh/sftp-server'
Restart ssh and you should be in business.
Thanks to Jeff for that very helpful blog post, and to Thad Meyer for pointing it out to me just last week (coincidentally enough).

How to Move Your Posterous Blog to WordPress or Blogger

This may be premature since the fate of Posterous is still unclear after being bought by Twitter, but I decided given the uncertainty and since moving to Blogger was something I had been considering for a while anyway this was a good time to move everything off of Posterous.

I really do like Posterous and I think they do a lot of things about 1000 times better than any other blogging service. But if nothing else this points out yet again the fact that if you use a service of any kind you’re at the mercy of that service or any company who may buy that service, so there’s a lot to be said for controlling your own stuff.

As you may have read in the announcement to which I linked above Posterous is saying they’ll provide tools to export your content to other services in the coming weeks, but not being one to wait around and see what happens I decided to get a jump on things and move some things to WordPress, and some others to Blogger, so I thought I’d share how I went about doing this and see if maybe there are better ways that I may have missed.

The interesting thing is WordPress is involved as a go-between even if you want to move to Blogger, but here’s how I went about moving off of Posterous.

Moving From Posterous to WordPress

WordPress provides a Posterous importer so this is actually pretty easy with a few caveats.

If you have self-hosted WordPress and add the Posterous plugin, it’s apparently very common to have it not work. I ran into this with the blog I was trying to move (specifically the ColdFusion Weekly podcast archives) from Posterous to a WordPress install on DreamHost.

Why DreamHost? I’ve used them in the past (been years though) for side projects now and then and especially for the money I’ve always had good luck with them, and since they offer unlimited disk space, bandwidth, databases, etc. for $100/yr I figured what the heck. I have a few small sites to move from Posterous to something else and this seemed like a good fit for these sites, especially given that the podcast files take up a chunk of disk space.

After I got DreamHost set up and used their one-click installer to install WordPress, I added the Posterous importer and tried to import the ColdFusion Weekly blog. No dice. I kept getting 403 errors which is apparently very common if you use the Posterous importer plugin on anything but wordpress.com.

The solution is to create a free blog on wordpress.com, run the Posterous importer from there, and then you can export from wordpress.com to WordPress’s WXR XML format, and use that to import to your final destination.

Here’s the steps:

  1. Create an account on wordpress.com if you don’t already have one
  2. Create a new blog (the free one is fine)
  3. Click on “Tools” and then “Import” on the left-hand side
  4. Click on “Posterous” in the list of import tools that comes up
  5. Enter your Posterous login information and the URL for your Posterous site and click submit
  6. Wait 🙂
If you have a ton of posts it can take a while to import but they send you an email when it’s done.
Once the import to wordpress.com is done, you:
  1. Click on “Tools” and then “Export” on the left-hand side
  2. Leave the “All Content” radio button checked if you want everything (or choose what you want to export) and click “Download Export File”
  3. Go to the WordPress install where you want to import the blog content
  4. Click on “Tools” and then “Import” on the left-hand side
  5. Click on “WordPress”
  6. Select the export file you downloaded from wordpress.com and click “Upload file and import”
Then you’re done, with some caveats.
What I ran into with the CF Weekly podcast blog is that Posterous didn’t have the podcast files as attachments, so the link to each audio file showed up at the bottom of each post and was still pointing to posterous.com. That’s no good.
To resolve that I used Firefox and the DownThemAll plugin to grab all the podcast files, and then SFTPd those up to DreamHost. At that point I did have to go through and add a link to each file in each post. This was relatively time consuming–maybe if Posterous provides real export tools at some point this won’t be necessary. Also since WordPress (at least the install on DreamHost) has a file upload size limit of 7MB I had to do this manually and then they don’t show up in the WordPress media library, so it was a bit of a hassle.
Main point here is once you import everything make sure and check the URLs for things like images, etc. so that they’re not still pointing to Posterous and disappear if/when Posterous goes away or you decide to shut down your Posterous site.

Moving From Posterous to Blogger

Moving from Posterous to Blogger is a very similar situation since, as far as I can tell, to do a bulk import/export you have to use WordPress as an intermediary. The only other thing with Blogger is that Posterous does let you autopost from Posterous to Blogger, so in theory if you set that up you could auto-post each post from Posterous to Blogger, but note that Blogger has an hourly limit on that so it’s probably not the best bet (not to mention there’s no way to auto-post everything at once).
To get a blog from Posterous to Blogger I did this:
  1. Follow the steps above to create a new (again, free is fine) blog on wordpress.com and import your Posterous blog to WordPress
  2. Follow the steps above to export and download your WordPress WXR file
So at this point you have your Posterous blog as a WXR file. Blogger doesn’t let you import this, but thankfully there’s a tool called wordpress2blogger that handles this quite nicely.
If your WXR file is < 1 MB in size (and you don’t care about uploading the contents to a third party), you can upload your WXR file directly into wordpress2blogger and this will create and download an XML file that can be imported into Blogger.
If your WXR file is > 1MB in size (which mine for my main blog was), you can grab the code for wordpress2blogger and run it on your local machine. On Linux it wasn’t bad to get this up and running at all since all the necessary Python stuff is already installed, but I can’t speak to how much of a pain this would be on other platforms. I used the tool running locally on my main blog and it worked great.
Regardless of how you handle this (and there may be other tools out there), once you have your Blogger-format XML file you then:
  1. Log into Blogger and select (or create, if it doesn’t exist) the blog to which you want to import your content
  2. Click on “Settings” and then “Other” on the left-hand side
  3. Click on “Import Blog,” select your XML file, and hit submit
One thing to note is that at least in the case of a couple of blogs I’ve done, even though there’s a checkbox to automatically publish imported posts, that didn’t seem to work for me. So after the import is complete, you may have to go to Blogger’s “all posts” page, hit the checkbox at the top of the list to select all your posts, and then click “Publish” to actually publish them.
Caveat here again is to check your asset paths. One of the blogs on which I used this method is the OpenCF Summit blog, and image paths are pointing to wordpress.com. Since that’s free I guess it’s not a big deal but it’s definitely not self-contained so I may go back through and move all this stuff at some point.
Goes to show I should have been using something like S3 for my assets all along. 🙂

Other Tools

Hopefully if/when Posterous does shut down they’ll make good on their promise to make tools that let you export easily, but I came across another tool that lets you export everything from Posterous as a zip file. I haven’t tried it, and it does cost ($9 for a single blog, $14 for multiple blogs under a single account), but this seems like a good way to just grab everything as static HTML so you at least have everything all in one place. I’m probably going to use this on my main blog as a backup even though I already have everything on Blogger (I’m mostly worried about asset paths and losing stuff on Posterous permanently) so if I do I’ll follow up and let folks know how that went.
So there you have it. If you’re more patient than I you can just wait and see what happens with Posterous, but if like me you want to jump ship now or have been thinking about doing it for a while anyway, these methods work well with the major potential issue being asset paths.
If you have other methods of doing this I’d love to hear them.

CouchDB Resources List

Since I did quite a bit of research for my post on authentication and security in CouchDB I figured I’d share what I came across as a link dump. Enjoy!

Reference Material

General Info and Tutorials

CouchDB in Government

General Case Studies

Search