The Definitive Guide to CouchDB Authentication and Security

With a bold title like that I suppose I should clarify a bit. I finally got frustrated enough with all the disparate and seemingly incomplete information on this topic to want to gather everything I know into a single place, both so I have it for my own reference and in the hopes that it will help others.

Since CouchDB is just an HTTP resource and can be secured at that level along the same lines as you’d secure any HTTP resource, I should also point out that I will not be covering things like putting a proxy in front of CouchDB, using SSL with CouchDB, or anything along those lines. This post is strictly limited to how authentication and security work within CouchDB itself.

CouchDB security is powerful and granular but frankly it’s also a bit quirky and counterintuitive. What I’m outlining here is my understanding of all of this after taking several runs at it, reading everything I could find on the Internet (yes, the whole Internet!), and a great deal of trial and error. That said, if there’s something I’m missing or not stating accurately here, I would LOVE to be corrected.

Basically the way security works in CouchDB is that users are stored in the _users database (or elsewhere if you like; this can be changed in the config file), and security revolves around three user roles:

  • Server admin
  • Database admin
  • Database reader

Notice one missing? That’s right, there is no defined database writer (or reader/writer) role. We’ll get to that in a minute. And of course you can define your own roles, provided that you write the functionality to make them meaningful to your databases.

Here’s how the three basic roles play out:

  • Server admins can do anything across the entire server. This includes creating/deleting databases, managing users, and full admin access to all databases, i.e. full CRUD on all documents as well as the ability to create/modify/delete views, run compaction and replication, etc. In short, god mode.
  • Database admins have full read/write access (including design documents) on specific databases and can also modify security settings on a specific database. (I don’t know if database admins can manage replication because I did not test that specifically.)
  • Database readers can only read documents and views on a specific database, and have no other permissions.

Even given all of this, reading and writing in CouchDB needs more clarification so you know what is and isn’t allowed:

  • By default all databases are read/write enabled for anonymous users, even if you define database admins on a database. Note that this includes the ability to call design documents via GET, but does not include the ability to create/edit/delete design documents. Once you turn off admin party you have to be a server or database admin in order to manage design documents.
  • If you define any database readers on a database anonymous reads are disabled, but anonymous writes (of regular documents, not design documents) are still enabled.
  • In order to prohibit anonymous writes, you must create a design document containing a validation function in each database to handle this (much more on this below).
  • Regardless of any other settings server admins always have full access to everything, with the exception that if you create a validation function the admin user’s access is impacted by any rules in that validation function. More on this below, but basically if you create a validation function looking for a specific user or role and the admin user doesn’t match the criteria, they’ll be blocked just like anyone else.
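For what it’s worth, the database admin and database reader assignments mentioned above live in a per-database _security object that you PUT as JSON via the HTTP API. Here’s a sketch of what that object looks like — the host, database, names, and roles are all placeholders:

```javascript
// Hypothetical _security object for a single database. You'd PUT this
// JSON to http://mycouch:5984/<dbname>/_security with a
// Content-Type of application/json (all names/roles are placeholders).
var security = {
  admins:  { names: ['alice'], roles: ['projectAdmin'] },
  readers: { names: ['bob'],   roles: ['projectReader'] }
};

// The request body is just the serialized object:
var body = JSON.stringify(security);
```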

Blocking Anonymous Writes

So now we come to the issue of blocking anonymous writes (meaning create/update/delete), and it’s simple enough but I have no idea why this isn’t done at the user level. Maybe there’s a logical reason that isn’t written down anywhere, but why you can’t create a reader/writer user or role is a mystery to me.

But enough whining. Here’s how you do it.

To block anonymous writes you have to create a design document in the database that contains what’s called a validation function. This means your design document must contain a validate_doc_update field, and the ID for this document follows the standard pattern for design documents, e.g. something like _design/blockAnonymousWrites. The value of the validate_doc_update field is a function that runs before every write operation, and it takes the new document, the old document (which would be null on create operations), and the user context as arguments. This gives you access to everything you need to do simple things like check for a valid user, or more complex things like checking whether specific fields exist in the document that’s about to be written, or even rejecting an update that conflicts with the old version of the document.

Here’s a sample validation function that simply checks for a specific user name, foo, and rejects the write operation if the user is not foo:

function(new_doc, old_doc, userCtx) {
  if(userCtx.name != 'foo') {
    throw({forbidden: "Not Authorized"});
  }
}

The userCtx object has properties of name and roles. The name property is the user name as a string, and roles is an array of role strings.

Let’s say you wanted to limit write operations to the role bar. To accomplish that you’d use JavaScript’s indexOf() method on the userCtx.roles array to see if the required role exists:

function(new_doc, old_doc, userCtx) {
  if(userCtx.roles.indexOf('bar') == -1) {
    throw({forbidden: "Not Authorized"});
  }
}

Obviously on top of all of this you have access to all the properties of the document being posted as well as the old document if it’s a revision, and you can use all that information to do whatever additional validation you need on the document data itself before allowing the document to be written to the database.
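To make that concrete, here’s a sketch of a validation function that combines a role check with a check on the document itself. The bar role and the type field are just illustrative, and the function is given a name here only so it can be exercised outside CouchDB (in a design document it would be the anonymous value of validate_doc_update):

```javascript
// Hypothetical example: require the 'bar' role AND a 'type' field on
// the document being written. Role and field names are placeholders.
function validate_doc_update(new_doc, old_doc, userCtx) {
  if (userCtx.roles.indexOf('bar') === -1) {
    throw({forbidden: "Not Authorized"});
  }
  if (!new_doc.type) {
    throw({forbidden: "Documents must have a type field"});
  }
}
```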

Creating Users

As far as creating users is concerned you can either do this in Futon or (as with everything in CouchDB) via the HTTP API. Note that if you create users via Futon you need to be aware that if you are logged in as admin and click the “Setup more admins” link you’re creating a server admin. That means they have permission to do literally everything on that CouchDB server.

If you want to create a non-admin user make sure you’re logged out and click on the “Signup” link, and you can create a user that way. Note that this doesn’t work on BigCouch if you’re hitting Futon on port 5984 since the _users database lives on port 5986 in BigCouch, and that backend port is by default only accessible via localhost; more on that below. And big thanks to Robert Newson on the CouchDB mailing list for pointing that out since I was tearing my hair out a bit after my recent migration to BigCouch.

If you want to create users via the HTTP API, in CouchDB 1.2 or higher you simply do a PUT to the _users database via curl or another HTTP tool, or make an HTTP call via your favorite scripting language. I’ll show all the examples in curl since it’s language agnostic and universally available (not to mention because I find curl so damn handy).

curl -X PUT http://mycouch:5984/_users/org.couchdb.user:bob -d '{"name":"bob", "password":"bobspassword", "roles":[], "type":"user"}' -H "Content-Type: application/json"  

That will create a user document with an ID of org.couchdb.user:bob and a user name of bob, and bob is not a server admin. In CouchDB 1.2 it will see the password field in the document and automatically create a password salt and hash the password for you.

On versions of CouchDB prior to 1.2, or with servers based on versions of CouchDB prior to 1.2 such as BigCouch 0.4.0 (which is based on CouchDB 1.1.1), the auto-salt-hash bit does not happen. This means you need to salt and hash the password information and store the hashed password and the salt in the user document.

As a reminder in case you weren’t paying attention earlier: On BigCouch the _users database is on port 5986. This had me banging my head against my desk for the better part of an afternoon. It’s probably documented somewhere but you know geeks and reading manuals, so I’m sharing that important tidbit in the hopes it helps someone else.

To create a user on CouchDB < 1.2 or BigCouch 0.4.0 (which again is based on CouchDB 1.1.1) you first need to:

  • Create a salt
  • Hash the concatenation of the password and the salt using SHA1
  • Include the salt used as the salt property of your user document, and the hashed password as the password_sha property of your user document

There are numerous ways to do all of this and you can see some examples in various languages and technologies on the CouchDB wiki, but since openssl is standard and a quick and easy way to do things I’ll recap that method here.

First you need to generate a salt:
SALT=`openssl rand 16 | openssl md5`

Next echo that out just to make sure it got set properly:
echo $SALT

Next you concatenate whatever password you want + the salt, and then hash the password using SHA1:
echo -n "thepasswordhere$SALT" | openssl sha1

One caveat: if when you echo $SALT it contains (stdin) at the beginning like so:
(stdin)= 4e8096c4d0047e8d535df4b356b8d102

Make sure NOT to include the (stdin)= part in what you’re going to put into CouchDB. Ignore (stdin)= and the space that follows and use only the hex value.

After generating a salt and hashing the password the end result that you put in CouchDB looks something like this (you’d obviously replace thehashedpassword and thesalt with the appropriate values):
curl -X PUT http://mycouch:5984/_users/org.couchdb.user:bob -d '{"name":"bob", "password_sha":"thehashedpassword", "salt":"thesalt", "roles":[], "type":"user"}' -H "Content-Type: application/json"

Of course if you know when you’re creating the user that you want to grant them a specific role, you’d put that in the roles array. These roles will be contained in userCtx.roles in validation functions and you can act on that accordingly (see the above discussion about validation functions for more details).

And again note that if you’re on BigCouch use port 5986 for the _users database!

Summary

To sum all this up, here’s a handy-dandy chart.

  • If you want to allow anonymous access to all functionality, including creating and deleting databases, you need to do nothing! Leave admin party turned on. (At your own risk, of course.)
  • If you want to disable anonymous server admin functionality (create/delete databases, etc.) but continue to allow anonymous read/write access (not including design documents) on all databases, you need to create at least one server admin user by clicking the “Fix this!” link next to the admin party warning on the lower right in Futon.
  • If you want to allow a user who is not a server admin to have admin rights on a specific database, you need to create a non-server-admin user and assign them (by name or role) to be a database admin user on the specific database. This can be done via the “Security” icon at the top of Futon when you’re in a specific database, or via the HTTP API.
  • If you want to block anonymous reads on a specific database, you need to create a non-server-admin user in CouchDB and assign them (by name or role) to be a database reader on the specific database. This can be done via the “Security” icon at the top of Futon when you’re in a specific database, or via the HTTP API.
  • If you want to block anonymous writes on a specific database, you need to create a non-server-admin user in CouchDB and create a design document in the database that includes a validation function, specifically in a validate_doc_update property in the design document. The value of this property is a function (that you write) that checks for a specific user name or role in the userCtx argument passed to the function, and throws an error if the user or role is not one you want writing to the database.

And that’s more or less all I know about CouchDB security. I’ll end with some links if you want to explore further.

Any questions, corrections, or suggestions for clarification are very welcome. Hope some of you found this helpful!

Security/Validation Function Links

A Reminder of the Power of CFML Custom Tags

Yeah, they’ve been around forever, and many people forgot all about custom tags when that whole CFC thing came about, but I still absolutely love custom tags and think they are incredibly useful in the view layer of an application.

I was reminded of this today while working on Enlist, which is an open source application for managing volunteers and events. It was again the focus of the hackfest at OpenCF Summit this year and we’re pushing towards a 1.0 release before long.

One of the things that was added to the application at OpenCF Summit this year was DataTables, which is a really slick jQuery plugin that adds sorting, searching, and paging to HTML tables, and with their latest update it works fantastically well with Twitter Bootstrap.

I’m sure many of you are already familiar with DataTables but for those of you who aren’t, the way it works is you simply add a little JavaScript at the top of a page containing a table to which you want to add the DataTables functionality. Identify the table by its ID and voila, you’re done.

The trick comes in when you’re using this on several pages, especially when you’re adding a bit more to the JavaScript for specific functionality, placement of controls, etc. In that case what you wind up with is the same code on numerous pages, with the only difference being the ID of the table to which you’re adding DataTables, and this doesn’t give you much flexibility to do things like enable row clicking on one table but not another.

Enter the underappreciated CFML custom tag. This is a perfect use case for a custom tag, because this allows the JavaScript functionality of DataTables to be wrapped with a little CFML to add some flexibility and intelligence to what otherwise would be a lot of copy/pasta JavaScript code.

You can see the code for the custom tag on GitHub, but basically this wrapper for the DataTables JavaScript lets a table ID, table body ID, and row link be passed in, and that’s then applied appropriately.

As for using the tag on a page, it’s as simple as importing the tag library and then wrapping the HTML table with the custom tag:

<!--- datatable.cfm custom tag lives in /enlist/customtags --->
<cfimport prefix="tags" taglib="/enlist/customtags" />
...
<tags:datatable tableID="myTableID" tableBodyID="myTableBodyID">
  <table id="myTableID">
    <thead>
      <tr>
        <th>Foo</th>
        <th>Bar</th>
        <th>Baz</th>
      </tr>
    </thead>
    <tbody id="myTableBodyID">
      <tr>
        <td>Foo!</td>
        <td>Bar!</td>
        <td>Baz!</td>
      </tr>
    </tbody>
  </table>
</tags:datatable>

There’s a bit more to it in its real-world use which you can see here since it does row linking as well.

Simple enough, but oh so powerful. Now I have a flexible, reusable wrapper for DataTables that I can drop into any page and customize further as needed.

So as you’re messing with all the highfalutin doo-dads we have at our disposal in CFML these days, don’t forget about the lowly custom tags because they can add a ton of flexibility and power to your view layer.

 

SiliconDust HDHomeRun PRIME and MoCA

Quick tip if you’re trying to get a networked TV tuner like the SiliconDust HDHomeRun PRIME working on the same jack with a MoCA box — short answer for the impatient among you is you’re going to need a diplexer. You can get something like this one which splits the signal into 5-860MHz and 950-2150MHz ranges, or something like this basic satellite/antenna diplexer from Radio Shack.

Plug the coax from the wall into the input of the diplexer, and send the lower range (which is labeled VHF on the basic diplexers) to the HomeRun and the higher range (which is labeled UHF on the basic diplexers) to the MoCA box, and everything should work. Without this, when I was going wall -> MoCA box and MoCA box -> HomeRun, networking worked fine for me but there was no cable signal being sent to the HomeRun.

So longer explanation for those of you who follow my home networking trials and tribulations, with the threat of Moxi going away (though this has been put on hold indefinitely) I decided to investigate other solutions and I wasn’t at all keen on spending $1.7 trillion on TiVos. (OK it’s not really that much but it adds up quick for a three-room setup when you get the lifetime subscription.)

My brother pointed me to a deal on an Acer Revo HTPC that was too good to pass up, so I got that and an HDHomeRun for the tuner. Yes, I know, I’m always talking about how much I hate Windows, I’m an open source bigot, etc. etc. but the reality of the situation is that I’m not the only one in my house that watches TV so I have to at least be semi-cognizant of ease of use, and much as I keep saying I’m going to build a MythTV box, every time I investigate things like remote controls, backend vs. frontend boxes, etc. it seems to get complicated rather quickly. If it were just me watching TV I’d go for it, but some people in my house just like to turn on the TV and have it work without any hassle. (Can you imagine?)

The good news on the MythTV front, however, is that buying the Revo freed up another desktop PC I was previously using just for PlayOn and PlayLater, so I’m going to put MythTV on there, and from what I’ve read it works with the HomeRun! At least that way I can maybe have MythTV in the man cave and have something easier to use (in theory anyway) in the living room.

Back to the networking piece of this. When I had Verizon FIOS TV I had been using MoCA for networking and it worked great. When I switched from Verizon to Comcast, however, the MoCA didn’t work in the rooms where I needed a cable TV signal and I didn’t do enough research at the time to figure out why. I assumed it was a signal strength issue so I tried some amps but none of them worked.

Instead of digging into that more at the time I moved to powerline networking and while it’s been decent overall, it’s never been as solid and fast as the MoCA was and it wasn’t working very well for the Revo to talk to the HomeRun. The picture would get jerky pretty often or the audio and video would get out of sync, so that wasn’t going to fly. (That said, powerline networking worked quite reliably for streaming HD from the main Moxi unit to Moxi Mates in other rooms, so don’t shy away from it based solely on me switching back!)

This prompted me to switch back to MoCA since that had worked fantastically well for me before, but I again ran into the issue with the cable signal. This time I did my homework and discovered that most people that were having this problem resolved it by using a diplexer, and luckily that fixed things for me too.

Next up is putting more RAM and a bigger hard drive in my other desktop machine and trying out MythTV so when I get a chance to do that I’ll be sure and share how that goes.

It’s Official: Moxi DVR is Dead

I’ve been expecting this news for some time now, but I went to the Moxi web site tonight to be greeted by the following:

The Moxi HD DVR and Moxi Mate are no longer available for purchase. Program guide data and technical support for the Moxi HD DVR will be available until December 31, 2013.

Hell’s bells. Such a nice setup but I knew they wouldn’t be around forever.

My reason for going to the site tonight is because I think the hard drive in mine is starting to die and I was going to poke around to see what’s involved with replacing the drive. I guess I still have nearly two years of life in the thing if I can get the hard drive replaced.

But, this is timely also because maybe it’s the push I need to get off cable anyway. Particularly with the latest announcement from Amazon of their content agreement with Viacom (and more to come, I’m sure), do I need cable? I have Netflix, Hulu Plus, PlayOn and PlayLater, computers galore, Roku … is the DVR as we know it finally irrelevant?

Well Moxi it’s been a pleasure knowing you. If my hard drive holds out or if I can get a new one put in there, I guess I have about 22 months to get this figured out. Clock’s ticking.

Automatically Backing Up Directories From Windows to a Pogoplug

Since this was more confusing than it should be I thought I’d throw this out into the wild in the hopes others in this situation will come across it in their searches.

If you have a Pogoplug (and if you don’t, get one! they’re awesome!) you may find yourself wanting to back up specific directories on a Windows machine to your Pogoplug. Note that on Linux you can of course connect to the Pogoplug and use whatever Linux scripts/tools you want to back stuff up (rsync being my tool of choice), but since Pogoplug does have a native Windows Pogoplug Uploader tool you can use that to get this all going pretty easily on Windows.

The only tricks here are figuring out the Pogoplug terminology and then figuring out how to configure things. Nothing against Pogoplug since I know they’re trying to make a buck, but by default everything you do either in the browser-based tool or in the Windows application will drive you towards using Pogoplug’s cloud space as your backup. This is a perfectly valid option, and gives you the added security of having an off-site backup.

For the Windows machine in question, however, I’m already backing it up off-site using Spideroak, so all I really wanted was a way to make sure every file that gets put into the Documents directory (meaning music, photos, etc.) also makes it over to the Pogoplug. This is both as a “local” (meaning in my house) backup as well as so the music and photos can be streamed from other devices.

The first thing you need to know is “backup” in Pogoplug terminology means backing up from the Pogoplug device’s drive to the Pogoplug cloud. To put it another way, you cannot (at least from what I can tell) back things up from your local machine to the Pogoplug device via the “backup” section in the browser-based tool. Also note that if backing up from the Pogoplug to their cloud is something you do want to do, you have to do that through the browser-based tool since backup options are not available in the Pogoplug Uploader that runs on Windows.

What we’re looking to do here in Pogoplug terminology is sync, not backup. Makes perfect sense when you think about it. Sync options are available only in the desktop tools (Windows, and presumably Mac), not in the browser interface.

Let’s set this up.

First, open the Pogoplug Uploader on Windows. In the top menu you’ll see a “sync” button. Click that.

At the bottom of that screen there’s a + button which lets you add local folders you want to sync. Click the + button and choose a local folder to sync.

You’ll then see that by default the destination directory on the Pogoplug will be the root of the drive attached to your Pogoplug. If you want to sync to another location on the Pogoplug’s drive, simply click the “change” link next to the destination location. This will give you the option to choose another destination on the Pogoplug drive, or you can also sync directly to the Pogoplug cloud.

Hope others find this helpful!

HP Pavilion dv7t

After my rant about Lenovo I figured I should follow up with a brief post about what I bought to replace it, which is an HP Pavilion dv7t. Overall I’m very impressed; it’s a really nice laptop for the money and light-years better than the Lenovo G770.

If you don’t feel like reading my Lenovo rant, bottom line is the G770 I bought was very crashy and Lenovo wanted another $179 to fix a brand-new computer that hadn’t ever worked well.

Rather than fund Lenovo’s nonsense I decided to spend that additional money on a different computer and I’m very glad I did. Since Costco has an awesome 90-day return policy on computers I ordered the HP (also from Costco–they had a good deal and in addition to the great return policy they extend the manufacturer’s warranty by 2 full years) before returning the Lenovo, so I had the HP and Lenovo side by side while I reinstalled all my software and transferred files.

The HP is built so much better than the Lenovo it’s not even funny. The Lenovo is all really cheap plastic, whereas the HP is mostly metal. The screen on the Lenovo is really crappy (washed out and grainy), while the HP has a nice, crisp, bright screen. The keyboard on the Lenovo felt really cheap, but the HP is very solid to type on. Even the touchpad works much better on the HP; the one on the Lenovo was jumpy and sometimes non-responsive.

To put this in perspective, the Lenovo before tax was $650, and the HP was $799 (the HP was on a $200 off deal at Costco right after the first of the year). And remember that to get the G770 working I would have had to pay Lenovo another $179, and that would have bumped the price of the Lenovo up to about $30 higher than the HP.

To be fair, here’s what the Lenovo had that the HP is missing:

  • Blu-Ray (don’t really care, but I do have an external Blu-Ray drive anyway)
  • Bluetooth (solved with a $20 micro Bluetooth USB adapter)

And here’s what the HP has over the Lenovo:

  • It works without costing me another $179
  • Screen, keyboard, and general build quality are vastly superior
  • i7 processor instead of i5
  • Better graphics card
  • This “beats audio” stuff that’s in the HP actually DOES sound damn good. It’s not all marketing hype. I use external speakers anyway but the built-in stuff sounds pretty amazing for a laptop on its own. There’s a nice little speaker bar above the keyboard and cut-outs for bass on the bottom of the machine.
  • Fingerprint reader (meh, doesn’t matter to me, but hey it’s a feature the Lenovo didn’t have …)
  • It works (bears repeating)

My only complaint with the HP thus far is the fan is pretty loud when it decides to kick into high gear, but I haven’t looked into whether or not that’s a configurable thing. They do have some “cool sense” thing that may need some tweaking. When the fan’s not running the machine is nice and quiet.

So there you have it, for $149 more I got a much, much nicer machine and I don’t have to pay extra to get it working.

I still prefer System76 for my higher-end machines but for a budget laptop I’m quite pleased with this HP, and would never, ever recommend anyone buy anything from Lenovo. Their low-end laptops are pure crap whereas the lower-end HPs, if this one is any indication, are still of very high quality. Inexpensive without being cheap.

Lastly, kudos to Costco for having such a great return policy. Lenovo wouldn’t take the G770 back because I was past the 21-day return window, even though the machine never did work right. Costco lets you return for any reason for 90 days, which saved my bacon because otherwise I would have been stuck with the junky Lenovo. (My only regret is that running the thing over with my car and subsequently setting it on fire would have made for a fun and very cathartic YouTube video.)

Good riddance Lenovo. Never again.

Why I’ll Never Buy Another Lenovo Computer

I won’t bore you with all the details since they’re in another post, but despite my overwhelming preference for GNU/Linux for some very specific reasons I wound up needing to get a Windows laptop last month. I didn’t need anything terribly fancy and Costco had a good deal on a Lenovo G770 so I went for it.

Well, ever since I got the thing it’s been bluescreening and/or just shutting itself off periodically, consistently every single night and at other random times during the day.

I don’t think it’s ever crashed while I was actually in the middle of using it, and as I mentioned in my previous post it seemed to be related to power saving activities (e.g. screen dimming, etc.). Since this isn’t my primary machine it wasn’t enough of a nuisance that I dropped everything to figure it out.

I did some searching and troubleshooting as I had time, looked for updated drivers and BIOS, etc. and since all else failed, I figured what the heck, it’s under warranty, let’s call support and see if they have any bright ideas.

I started with the Costco Concierge support that came with the machine, and I was pleasantly surprised. They answered the phone right away, listened to all the troubleshooting I’d done thus far and based on that had a couple of suggestions I hadn’t tried, and overall were quite good.

That didn’t fix the issue however, so they connected me to Lenovo support. Lenovo asked me a bunch of questions to try to rule out the hardware as the issue (I’m still not convinced it’s not, personally), and they said since it sounded like it was just a power saving driver issue they’d pass me on to software support to get it resolved. The software support queue was very backed up so they said they’d put me in for a callback within an hour.

Several days passed and I hadn’t heard anything (again, not a terribly pressing issue) so I finally called the phone number they gave me and gave them my case number. After 30 minutes of back and forth with the support person (and I gave them a case number, remember) I was told I had dialed hardware support and that I had to talk to software support. (I dialed the only number they gave me, but whatever.) They again told me I’d get a callback but this time in about 15 minutes, so I figured I’d give it an hour and just call back in if they didn’t call.

About an hour later I received a call from software support. This is where stuff gets really fun. I explained the issue again, and the short version of their response is that since this is a software related problem as opposed to a hardware related problem, the software is not covered by the warranty but they’d be happy to fix my problem if I either paid for a single incident support ticket, or upgraded to the premium warranty which does cover software.

The cost for either choice was $179.

So I said to the support tech, “Let me get this straight. I bought a Lenovo computer with your installation of Windows on it and your drivers, it’s never worked right, and you’re telling me that you don’t support your own Windows installation and your own drivers.”

His response was, “Sir, we find that 70% of software problems can be resolved by users themselves so it doesn’t make sense to make people pay more for the computer in order to have that covered since most people don’t need it.”

Trying not to be offended (I’m a 1337 g33k dammit!) I explained to the guy that I was a computer programmer by trade, and that I had spent quite a lot of time trying to solve the problem myself because I absolutely hate calling tech support since they aren’t ever terribly helpful.

I then said, “Look at this from my perspective. I bought this machine. It doesn’t work right. I don’t care if it’s the hardware or the software. I just expect a brand-new machine to work properly. I find it astonishing that you’d sell a computer that YOU configured and if there’s something wrong with the software that YOU pre-install on the computer, that it’s not supported without an additional charge of about 25% of the cost of the machine.”

He just parroted back the “most software problems can be solved by users” line.

I said that’s ridiculous, but fine, I’m wasting my time here so I’d like to return the machine since it’s still under warranty and I don’t like the way Lenovo does business.

He told me Lenovo’s return policy is 21 days, which I was just outside. So basically I got penalized for trying to troubleshoot it myself and not calling in sooner.

I was pretty pissed at this point but I figure ultimately I need the damn thing working if I can’t return it, so I said, “If I pay the $179 you guarantee this thing will work and you’ll keep on it until it does, including sending me a new machine if you can’t fix it?”

Short version of his response was that they guarantee they will do everything they can to fix it and if all else fails, they’ll send me a system restore disk.

Well that’s just dandy. Given that scenario they have no actual incentive to spend any time fixing the problem. Their time is money, but apparently my time is free, so here’s how me paying for a premium warranty would play out. I’d pay the $179, they’d probably spend 2 minutes saying stuff like “have you tried rebooting?”, and then they’d send me a very pricey restore disk and tell me to wipe the computer to put it back to its original state.

I explained to the guy that it didn’t work in its original state, so why on earth would they expect a system restore to fix the problem? Not to mention I already had spent quite a lot of time doing all the Windows updates and installing software.

Since that’s all he could do I told him I’d call Costco or my credit card company since I bet they both have better return policies than Lenovo directly. He said he’d call me back in an hour to see if I still wanted to pay them $179 to fix things.

Small detour here–don’t get me wrong, I understand I bought a “value line” laptop. I’m not expecting a $3000 ThinkPad for 1/3 the price. What I do expect, however, is that the machine will work, and I also expect that a company will fix something they sell me if it’s broken when I buy it.

I then called Costco, and they have a 90 day return policy. So bite me Lenovo, your resellers back your stupid products better than you do yourself. Since Costco had done so right by me through this whole process and was going to take back this Lenogo (see what I did there?) I immediately ordered a new HP dv7t from costco.com. I figure for the $179 I would have paid Lenovo I might as well get a nicer computer instead of adding 25% to the cost of this piece of junk.

Also since I still have a nice window to return the Lenovo to Costco, this way I can get the new computer, transfer all my crap to it from the Lenovo, and then return the Lenovo with plenty of time to spare.

Bottom line here is I’m still quite flabbergasted that Lenovo would sell a computer with their pre-install of Windows, their drivers, etc. and not support a damn bit of it without making people pay extra. I guess they’re just playing the odds but here’s another thought Lenovo: if you seriously only have to help 30% of the people who buy your products with the software that you put on the computers when you ship them, is that really a big deal? You’d rather have people like myself stop buying your products altogether?

Here’s hoping the HP situation turns out much better.

Email Account Zero Achieved

In my “Email is Broken” post I outlined a game plan for getting myself out of the business of managing email. This morning I took a big first step in achieving email account zero, which is not only inbox zero but also means I don’t have any email stored in folders in my email account.

Going back through my old email folders was pretty eye-opening. I had thousands of messages dating back to 2006, so for the most part I took the “if you haven’t looked at it since you can’t remember you don’t need it” approach and deleted large swaths of email with reckless abandon. I at least scanned each folder to see if anything jumped out at me but (surprise!) it was all outdated crap.

I did have email folders for “Accounts,” “Servers,” and “Serial Numbers” that took a bit more attention since they had information in them I didn’t want to lose. For those folders I looked at each email and moved the information to the following places:

  • If it was something like a server name, IP address, etc. I made sure it was in our server wiki on FogBugz
  • If it was login information for an application, I put that information in KeepassX
  • If it was a software serial number for server software (meaning not a personal serial number just for me) I made sure it was in our serial number wiki in FogBugz
  • If it was a software serial number for an individual license for me personally, I put it in a Tomboy note for serial numbers
  • If it was an email containing an attachment I wanted to save, I saved it to my local hard drive in a logically named directory
  • If there was email in my inbox that was only sitting there as a “to do” reminder, I put it in a “to do” note in Tomboy

Also I have one particular application that sends out a lot of alerts that contain important information, but that typically someone other than me has to do something about, so I took myself off that distribution list. The info is all saved in log files anyway so I can always trace it back that way if there is something I need to look at.

And with that I have zero email in my account, I’m much better organized, and I’m not relying on email as a filing cabinet. Email wasn’t designed for that so it’s no wonder it sucks at it.

Much more ahead but this big step eliminates a lot of what Scott Hanselman calls “psychic weight” I was feeling related to my email.

Next up is adventures with StatusNet and stopping treating email as IM.

Email is a Broken Communication Medium. Here’s What I’m Doing About It.

I had many, many, many thoughts today on why email is a completely broken communication medium in the workplace, both in terms of its effectiveness as a communication tool and in terms of the horrendous unproductive time suck it can so easily become.

For example, when there are issues with production systems high-speed, high-volume email threads between co-workers should not be the primary means of communicating. Generating a horrendous amount of noise and distraction in a crisis is pretty much the last thing you want to do, and certainly reading and replying to emails about a problem should not make up 90% of the time spent solving the problem.

Furthermore, in a lot of organizations email is seen as something it’s not, namely a synchronous, instantaneous medium. Email is not IM. Email isn’t a ringing phone that someone has to answer right away. We need to stop thinking of it that way.

Sure, email is delivered more or less instantly, but the habit that I (and I’m sure others) fall into, particularly if you get an alert sound or some other indication every time you receive an email, is checking the old inbox as a Pavlovian reflex.

Rather than merely flapping my gums with an “email sucks” rant, I’m finally annoyed enough with the problem and convinced about how bad it is on numerous levels (some companies think it’s so bad they’re banning email entirely) that I’m going to do something about it. Doing something–anything at all–is better than doing nothing, so here’s what I came up with so far.

First, I installed StatusNet on a server so my co-workers and I can pilot it. To me that’s pretty much “problem solved” for a lot of issues but I’m sure some folks will take some convincing. That’s understandable, but I’m remaining optimistic for the moment. (My colleagues and I have about a gazillion ideas on just how counter-productive and horrible email is as well as how StatusNet will be used and some of the problems it’ll solve, so I should have more to say about that as we put our lofty theories into practice.)

Second, I am adopting the following rules for my work email:

  1. Inbox Zero. No almosts. No exceptions. Zero means zero. Included in this is not using my inbox as a to-do list, something of which I am horribly guilty.
  2. No folders or saved email. No exceptions. If something in an email is important enough to save, it must be saved somewhere appropriate for the specific type of information, be that a wiki page, Keepass, a good old-fashioned browser bookmark, or whatever. My email account will no longer be used as a filing cabinet.
  3. I will not check my email every time I hear my phone buzz. I will add a hopefully not-too-offensive note to my email signature explaining that if something is urgent and requires a reply any sooner than 1 hour minimum, people should contact me via IM or phone. (There may be some issues with the definition of “urgent” during the adjustment period but I think this is a good starting place.)

Throw some Pomodoro Technique into the mix and I’ll probably be so productive my hair will catch on fire.

I’d be interested in hearing your experiences with this. How do you treat and handle email? Have you tried to break the email habit? If so, what worked and what didn’t?

When I look back at a day like today and the amount of actual work accomplished is such a small percentage of the 14+ hours I spent furiously working on issues, that’s a clear indication something is drastically wrong. Even if some of what I outline above fails it’s better than the status quo, because the status quo is failing miserably.

Solution for Windows 7 “The Computer Has Rebooted From a Bugcheck” Error

I’m not proud of it but over Christmas I bought a Lenovo G770 laptop running Windows 7 (Home Premium 64-bit specifically). It’s a really nice machine for the money, particularly given the deal I got on it through Costco.

The main reason for me getting a Windows machine–OK, call it an excuse if you will–is because I needed a machine to run Pro Tools and Adobe Audition for both voiceover work as well as some volunteer production work I hope to be doing for KBCS before too long. Yes, unfortunately when you get into needing to run specific software like that, there are some things that just don’t run on Linux. This also opens the door to doing some live shows on CodeBass this year using SAM Broadcaster, or maybe just taking some of the load off Vicky by uploading my own damn show for a change!

Anyway, lately the machine’s been shutting down when I’m not looking, so I took the time to dig into the Event Viewer today and saw the error “The computer has rebooted from a bugcheck” along with the location of a memory dump file. Also occasionally when the screen shuts off I’ll hear the fan go nuts and it won’t respond to keyboard or mouse input, requiring a hard reboot to get it to come back.

I was concerned maybe it’s a hardware issue of some sort but I found a solution that seems to be working thus far, though the real test will be if it makes it through the night tonight without dying. I found a lot of posts about this that were unrelated to anything that would be happening on my machine (many of which basically said “uninstall Zone Alarm” which I don’t have installed), but one I came across sounded plausible since the error seems to occur when it’s doing something related to power saving.

This isn’t the exact error I was getting, but the fix described there has thus far done the trick, since the problem tends to happen at least once during the day as well:

  1. Run a command prompt as administrator
  2. Type “powercfg -h on” without the quotes and hit enter

I then rebooted just to be safe (it’s Windows, after all).

What that does is manually re-enable hibernate mode, and apparently even if you don’t use it (which I don’t) it fixes the issue.

As I said the real test will be if it makes it through the night without dying since it always happens overnight, but I’m cautiously optimistic.

Windows 7 and problems like this sure make me appreciate Linux.