Argh! I’m pen-less!

Photocredit: Tony Hall. Click photo to visit Tony's photostream @ flickr.com

Pen-less. It’s 9:30 in the evening, and I need to write out some thoughts (about a split-complementary color set).

At work last Friday, the pen that I’ve had with me for some months now finally gave up its last ink. It was a Pilot Precise V5, black.

My habit has been to have that pen in my left front pants pocket, reliably at hand. I guarded it, making sure to have it back if I let a colleague or a daughter use it for a moment. I gave other pens like it away, but kept that one.

Of course I have other pens. Bic ball-point pens: the kind you get in bulk in the plastic bags during back-to-school sales. I hate those pens. They fail so often, and you have to drag the ink out of them, scraping across paper. Scribble in circles first just to get them warmed up. Lazy bastards. Then you have to draw across your strokes again, filling in ink on the empty indentations of your first pass at writing.

I’m irritated at myself for getting into this pen-less position. Luckily, I have Plan B: pencils and a sharpener.

Nephtali web framework creator talks FP

Adam Richardson of Envision Internet Consulting has been a long-time collaborator and good friend of mine, and over the last few years I’ve seen him pursue knowledge in web programming with persistence that I’ve never seen from anyone else.

One of Adam’s projects is Nephtali: a web framework that focuses on security and considers the usability of the framework itself. Adam has labored over details in his latest version of Nephtali that will make life better for developers. For instance, he planned the naming convention and namespaces for functions so that in an IDE like NetBeans, the functions appear grouped logically in an easy-to-access format.

Nephtali is up to version 3.0.5 at the time of this writing. The earlier versions were entirely object-oriented PHP; in version 3, Adam re-thought Nephtali away from its OOP base and rewrote it using functional programming (FP).

For the last month or so, Adam has been lobbying various hosts to upgrade to PHP 5.3 or higher, because Nephtali requires at least that version. It is right on the cutting edge. I asked Adam a few questions about Nephtali, and that dialogue follows.

Davin: Nephtali requires the latest version of PHP, version 5.3 or higher, but many hosting providers don’t provide that yet. What about PHP 5.3 is worth waiting for?

Adam: PHP 5.3 includes many enhancements and bug fixes, but the features that facilitated Nephtali’s general approach and architecture were support for namespaces and the new Functional Programming (FP) capabilities.

Davin: I’m familiar with object oriented programming, but you’re talking about “functional programming.” Can you summarize the difference, and explain why you decided to go with FP instead of OOP with Nephtali?

Adam: Most programming languages offer the ability to define functions; however, that doesn’t necessarily make them functional programming languages. It’s easy to get into flame wars over what a “true” functional language is, but I’ll lay out some general principles:

  • Functions can be passed around just like other datatypes.
  • Closures allow variables that are in scope when a function is declared to be accessed and carried around within the function.
  • Side effects (changing the value of a variable within a function) are limited.
  • Many FP languages natively support currying (the ability to fix a subset of a function’s arguments and let later calls supply the rest).

PHP now supports the first two, and with some discipline you can limit the impact of side effects within your code (there are even some clever hacks for the currying issue). But the big question is, “What does this buy you?”
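To make those first two principles (and the currying hack) concrete, here is a minimal sketch in PHP 5.3 syntax. This is my own illustration, not Nephtali code:

```php
<?php
// First-class functions: a closure assigned to a variable (PHP 5.3+).
$double = function ($n) {
    return $n * 2;
};

// Closures: the `use` clause captures $factor at declaration time,
// so the variable travels around inside the function.
$factor = 10;
$scale = function ($n) use ($factor) {
    return $n * $factor;
};

// A hand-rolled curry: fix the first argument now, supply the rest later.
function curry_first($fn, $first) {
    return function ($second) use ($fn, $first) {
        return $fn($first, $second);
    };
}

$add  = function ($a, $b) { return $a + $b; };
$add5 = curry_first($add, 5);

echo $double(21), "\n"; // 42
echo $scale(4), "\n";   // 40
echo $add5(3), "\n";    // 8
```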

Simplicity.

Object Oriented Programming (OOP) bundles variables with the functions (methods) that directly interact with the variables.  This does provide a degree of encapsulation, as the accessor methods make sure that instance and class variables contain what is expected.  However, the issue often isn’t “What” a variable is changed to, but rather  “When” a variable is changed.  This problem of “When” is most glaring for OOP developers when implementing parallel processing, an issue that has produced many complex, clunky answers.

Taking an FP approach simplifies the question of “When,” as you move from a paradigm of altering variables to one of acting on values returned from functions. Relatively speaking, when following general FP conventions, writing unit tests is simple, writing parallel processing apps is simple (see Scala, Clojure, Erlang, etc.), and as it turns out, writing a capable web framework is simple, too.

Davin: What about models? So many of us in the web field have become familiar with the MVC (model, view, controller) architecture in frameworks, and it seems like Nephtali doesn’t use the models concept at all. Is that right, and if so, what do you do about databases?

Adam: Simplicity.

In terms of DB interaction, I like PHP’s PDO capabilities and security. Performing simple DB work is easy in Nephtali, as you can generate code very quickly using NEdit, the online code generator for Nephtali. Nephtali provides some simple enhancements (functions that automatically handle table inserts, updates, and deletes; easy connection management; etc.), but you’re always working close enough to the basic PDO capabilities that it’s still very easy to perform transactions, connect to multiple DBs, work with existing tables that don’t follow particular naming conventions, and whatever else your unique environment may entail. One line of code is all it takes to grab a set of rows from a DB.
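For readers unfamiliar with PDO, here is what that “one line to grab a set of rows” looks like in plain PDO, without Nephtali’s wrapper functions. An in-memory SQLite database keeps the sketch self-contained; in practice you’d swap the DSN for your MySQL or PostgreSQL connection:

```php
<?php
// Plain PDO (not Nephtali's wrappers). SQLite in-memory DB used here
// only so the example runs anywhere without a server.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)');
$pdo->exec("INSERT INTO posts (title) VALUES ('Hello')");
$pdo->exec("INSERT INTO posts (title) VALUES ('World')");

// One line is all it takes to grab a set of rows:
$rows = $pdo->query('SELECT id, title FROM posts')->fetchAll(PDO::FETCH_ASSOC);
```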

Second, utilizing the parallel processing capabilities of cURL, Nephtali provides some special capabilities for web requests. A couple of lines of code can retrieve a web request (in parallel with any other web requests) and format the retrieved data into whatever container (object or array) you’d like.

Davin: I saw the post on the Nephtali blog about Nephtali’s parallel processing for web requests. Can you explain when that would be useful, and when I shouldn’t run ahead and parallelize everything?

Adam: If you have a page that only makes use of one web service, you don’t gain anything. However, if you have a page like Nephtali’s homepage, which makes a request to Google Code for the latest download and also makes a request to the WordPress blog for recent entries, you can gain a significant performance improvement by processing those requests in parallel. Instead of ending up with serial calls to the two services (GoogleCodeRequestTime + WordPressRequestTime), the parallel request now takes only the greater of the two (GoogleCodeRequestTime -OR- WordPressRequestTime).

Nephtali handles the processing for you automatically.  Always use the request() and response() functions, and Nephtali will make things faster when they can be faster.  That’s it.
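Nephtali’s request()/response() functions hide the mechanics, but the underlying PHP technique is the curl_multi family. This sketch (my illustration, not Nephtali’s actual code, with placeholder URLs) shows how two requests run concurrently so total wait time approaches that of the slowest single request:

```php
<?php
// Parallel HTTP requests with curl_multi. URLs are placeholders.
$urls = array(
    'http://example.com/latest-download',
    'http://example.org/blog/feed',
);

$multi   = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($multi, $ch);
    $handles[] = $ch;
}

// Drive all transfers at once; they complete in parallel.
do {
    curl_multi_exec($multi, $running);
    if (curl_multi_select($multi, 1.0) === -1) {
        usleep(100000); // avoid busy-waiting if select() fails
    }
} while ($running > 0);

$bodies = array();
foreach ($handles as $ch) {
    $bodies[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multi, $ch);
}
curl_multi_close($multi);
```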

More about Nephtali

Learn more about Nephtali at nephtaliproject.com. When you’re there, check out the screencasts on using Nephtali. One of the great features on that site is NEdit, a tool that you can use to write up a lot of the code you’ll need for Nephtali pages.

Oh, and don’t hesitate to use the contact form. Adam loves talking with people about Nephtali, and I’m sure he’ll happily answer questions or respond to comments about the framework.

How WordPress falters as a CMS: Multiple content fields

WordPress is amazing and keeps getting better, but I want to be clear about an inherent limitation that WordPress has as a content management system (CMS). That limitation is that WordPress doesn’t handle multiple content regions on web pages.

Too strong? With WordPress, you can try to use custom fields or innovative hacks like Bill Erickson’s approach to multiple content areas using H4 elements in his excellent theme “Thesis”. Unfortunately, neither of those approaches really deals with the depth of the design problem that often requires multiple content areas for pages.

As an information architect/user experience designer, I’ve been involved in many projects that required more types of content on any single screen than WordPress is designed to handle.

Let me draw out what I’m talking about here.

Exhibit A: Page content that WordPress is designed to handle

In a standard WordPress page or post, you’ll see these author-controlled pieces of content.

  • Post/page Title
  • Body
  • Excerpt (often unused)
Standard WordPress content fields include the title, excerpt, and body.

There are other sets of data for a page or post that an author can control, too, but these are meta-data such as tags, categories, slug (shows up in the URL), and possibly search engine optimization information like title, description, and keywords.

For a normal blog, many online trade journals, and a lot of basic websites, this really covers the bases. The body contains the bulk of the content including images, video, and audio that can be intermingled with the text itself. This model is very flexible, and it has definitely proven itself.

Exhibit B: Page content that pushes WordPress too far

In 2009, there was a small project at work to develop the website Covenant Musicians, and because the person who would keep the site updated was already using WordPress, we made the decision to build this site with WordPress too.

Well, if you look at one of the destination pages for this site, the musician profile page (here’s one for example), you’ll notice some different pieces of content which may or may not be present on any particular musician profile page. When they are present, they need to be in certain places and sometimes with certain content.

This custom WordPress page uses fields in addition to the standard options: Musician Image, URL, and Video.

The problem is that to control those extra pieces of content (the video, the band image, the link to the band’s website), the site owner needs to use WordPress’s custom fields in very precise ways, without the benefit of WordPress’s content editing tools. What a drag!
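On the theme side, pulling those custom fields into the template looks something like the sketch below. The field names (`musician_image`, `musician_url`, `musician_video`) are hypothetical stand-ins, not the actual keys used on the Covenant Musicians site, and this is generic WordPress template code rather than that site’s theme:

```php
<?php
// Inside a WordPress theme template, within The Loop.
// Field keys here are hypothetical examples.
$image = get_post_meta(get_the_ID(), 'musician_image', true);
$url   = get_post_meta(get_the_ID(), 'musician_url', true);
$video = get_post_meta(get_the_ID(), 'musician_video', true);

// Each piece only renders when the owner filled in the field --
// and filled it in exactly right, which is the usability problem.
if ($image) {
    echo '<img src="' . esc_url($image) . '" alt="Band photo" />';
}
if ($url) {
    echo '<a href="' . esc_url($url) . '">Visit the band online</a>';
}
if ($video) {
    echo $video; // raw embed code pasted into the custom field
}
```

The fragility is visible in the code: a typo in a field key, or embed markup pasted slightly wrong, silently breaks the page, and the author gets none of the editor’s usual feedback.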

To make life easier for the site owner, we ended up recording screencast instructions on how to use these fields and delivered those help files with the site itself. (We used Jing by TechSmith, by the way.)

It would’ve been better had the interface been clear enough so that we didn’t feel the need to document the process of updating these destination pages, but that’s the trouble with stretching WordPress beyond its default content fields.

Ask too much of WordPress and ease-of-use is the casualty

Do you see the difference? When an effective design solution requires multiple types of content per page, using WordPress will actually make your website difficult to manage. WordPress is usually so easy to use that when you hit this wall, it is very apparent.

When you’re at that point, WordPress is probably not the right CMS to choose.

Should WordPress improve in this area?

Whether through the core application or through an excellent plug-in (is there one already that I missed?), if WordPress is going to grow in the content management systems field, this shortfall will need to be addressed.

However, WordPress is really excellent at what it does already, and the better course might be to decide to keep the features in check and let other systems compete in the mid-to-enterprise scale CMS arena. Scope creep never stops, and a good application strategy knows when to say “no.”

Am I wrong?

Am I off-base here? This is just one aspect of WordPress that should limit its use. Another that should make designers think twice is faceted navigation that requires more than one dimension (tags can probably handle one dimension). But, again, those are more complex design requirements.

I’m not a WordPress consultant, and I’ll bet some of you would like to point to the errors in my thinking. Let’s hear it.

Experience theme for Covenant Eyes

Cindy Chastain’s article, “Experience Themes,” at Boxes and Arrows outlines a neat way to package the concepts that help user experience designers put creative work into context.

When I was leading many design/development projects at a time, I’d write a creative brief for each—it helped me and the team stay clearheaded about each project. An experience theme seems like an alternative to a creative brief.

The following thoughts apply Chastain’s article to my work at Covenant Eyes.

Covenant Eyes is rich with stories

At Covenant Eyes, Inc., we have a full-time blogger, Luke. As I see it, Luke’s job is to draw out the stories surrounding Covenant Eyes and to share them using the Internet. He’s our storyteller.

What are the roles? There are so many stories, from people in so many places in life.

  • husbands, fathers
  • wives, mothers
  • children
  • pastors, rabbis
  • counselors
  • porn addicts, recovering porn addicts, people who have beaten the addiction
  • and the list continues

What are some theme concepts?

  • For people fighting a problem with pornography: Learn to be honest again (These words come from Michael Leahy’s mouth while he was visiting our offices.)
  • For mothers with children who use the Internet: Protect my family
  • For fathers with a teenage son: Teach him to be responsible for his actions

Experience transcends our services

What work do we do at our company? Although others I work with may claim we deliver software, I think we deliver information. Our software allows us to provide information-rich reports on Internet usage that can be used within relationships. I think of these as “accountability relationships.”

The theme concepts listed above have little to do with software or even our service. The real value we deliver is the sense that what could be someone’s little secret is not actually hidden. That little bit of knowledge has proven its ability to change lives, and relationships, for the better.

The hard part is carrying the experience theme across our touch points with users

I recently helped put together a spreadsheet to inventory the automated emails we send to users at various points. There were over 60 emails, and they fulfill needs ranging from billing concerns to helpful reminders after a few weeks of being a customer. Many of these messages should be revised, and keeping the theme in mind will help create a coherent experience for our users.

Covenant Eyes has multiple touch points with its users.

Beyond these emails is a myriad of other touch points:

  • sign up form
  • help documents
  • filter settings controls
  • accountability reports
  • tech support phone calls
  • blog posts
  • and so on

Taken all together, these communications can benefit from an experience theme.

I suspect the key to pulling this off is to have all those involved with crafting these touch points understand the experience theme and leave it to them to carry it through. As the company’s user experience lead, my job may be to facilitate the definition and adoption of an experience theme, and motivate and lead by example so others will carry the vision.

Seams between systems and the Vignelli NYC subway map

I just read “Mr. Vignelli’s Map” by Michael Bierut over at Design Observer. In the post, Bierut remembers and analyzes why the public rejected Vignelli’s map of the New York City subway system. (Here’s the Vignelli subway map.)

The Vignelli map smartly acknowledged that for passengers of the subway focused on navigating the subway system itself, above ground geography was nothing but a factor of added complexity. So the map instead was oriented around the subway lines and stops themselves, abstracting actual geography. This was a keen simplification from an information design perspective.

But consider this observation from Bierut’s article.

To make the map work graphically meant that a few geographic liberties had to be taken. What about, for instance, the fact that the Vignelli map represented Central Park as a square, when in fact it is three times as long as it is wide? If you’re underground, of course, it doesn’t matter: there simply aren’t as many stops along Central Park as there are in midtown, so it requires less map space. But what if, for whatever reason, you wanted to get out at 59th Street and take a walk on a crisp fall evening? Imagine your surprise when you found yourself hiking for hours on a route that looked like it would take minutes on Vignelli’s map.

The concept of designing the seams between systems has become prominent within the user experience design community over the last couple of years. This is an example of that problem of seams.

Passengers of the subway system are also navigators of the city itself, so their context of use extends beyond the subway, and their decisions concern not merely which stop to get on and off at, but where they are going once they leave the subway.

Bierut makes the point:

The problem, of course, was that Vignelli’s logical system came into conflict with another, equally logical system: the 1811 Commissioners’ Plan for Manhattan.

How can designers consider the seams between the subway system and the city plan to result in a better-designed subway map?

NYC, of course, has a functioning subway map. Is functionality the only litmus test?

(I’ve taken the subway in New York City only once, and managed to get from Point A to Point B successfully, although with some anxiety.)

The Thanksgiving Duck

Lila and Eva wishing you peace on Thanksgiving, 2009.

As mentioned last post, I tried a duck for Thanksgiving. Lila summed it up with “It’s okay Dad, but it’s not appealing.”

I could not fit the bird into the crock pot, so my Plan A was foiled. Instead I roasted it in the oven. I applied poultry seasoning and tucked onion and apple chunks inside before putting it into the oven.

What about the fat? The infamous problem with duck is the layer of fat under the skin of the duck. I poked holes in the skin so the fat would drain out during roasting. This certainly helped and the skin was actually very nice, golden and crispy. There were still some unappealing sections of fat, although they were easy to separate from the meat.

I’d never had duck before, and the taste and texture were unexpected. It wasn’t bad, and the overall dinner was great.

A recipe for disaster?

Against the advice of Adam, I am going to attempt to cook a small turkey in my crock pot for Thanksgiving.

It’s just me, Lila, and Eva, so we don’t need a big bird.

If I can’t get it to fit in the crock pot I reserve the right to abort to Plan B, which is to put the bird in the regular old oven. But that isn’t as interesting.

On a side note, I’ll bet the frozen chickens feel like rejects this time of year. Poor little birds.

Happy Thanksgiving everyone!

Update, 10:47 PM

The smallest turkey at the store was 10 pounds! That’s four more pounds than I dare to try to fit into the crock pot. So, while I nearly decided to find the biggest crock pot ever, I decided instead to get a 5 pound duck.

Oh yes, the game’s afoot now. Plus, the bill was less than that of a 10 pound turkey. 😉

WUD 2009 at MSU recap

Yesterday’s World Usability Day event at Michigan State University was good—but a little odd.

The morning sessions were spot-on, and some of the afternoon talks were good as well. However, it was clear that some panelists didn’t understand their audience of usability and accessibility practitioners. Their talks were still interesting, but they didn’t understand the user experience industry’s take on words like “accessibility” and “sustainability,” which was this year’s theme.

So, here’s a quick recap.

Assistive Technology Expo

I attended the Assistive Technology Expo in the morning. I posted yesterday about comments regarding CAPTCHAs gleaned from that talk.

The two presenters, who are themselves blind, work in the technology field providing support for people with various disabilities. They demonstrated how they use screen readers to accomplish various tasks online, like checking the weather, tuning into a football game streamed online, checking stocks, buying groceries, and buying a computer.

I appreciate observing and listening to people with disabilities who use the Internet, because it helps counter what I know about the technology with what is clear about people. That is, people adapt and make things work to the best of their ability. These two presenters were gracious about technology-related problems that I know many sighted people would be upset with. They also pointed out that most websites are at some level usable by them, but of course they prefer ones that are more accessible. We did see a number of examples where they simply wouldn’t have been able to overcome some technical roadblocks without significant additional effort.

One part of the presentation included them showcasing how they use an iPhone. An accessibility feature on the iPhone causes a single tap on the touch screen to say the name of the application (or letter if it is the keypad), while the double-tap will activate it. So, they have audible feedback to find the function they need, plus the capability to then activate it. This seemed to work very well for them.

Another point made during the session is that these assistive technologies like screen readers and electronic braille devices are quite expensive. Some screen reader programs are more expensive than the cost of the computer itself. However, the presenters voiced hope because the prices are coming down. They cited Apple shipping Macs that have built-in accessibility features at zero additional cost. Also, for Windows, there are some screen reader programs that are only a few hundred dollars.

Special Session: Contemporary Issues of IT in the Sustainable Global Knowledge Economy

This panel session had presenters on the topics of:

  • delivering broadband across the state of Michigan even to rural areas (George Boersma)
  • ITEC, a center in Lansing that provides after-school programs to help youth learn about technology, science and math (Kirk Riley)
  • IT accessibility (Sharron Rush)
  • global knowledge economy (Mark Wilson)

All the presenters were well-spoken and interesting. Sharron Rush seemed to be the one presenter who is part of the usability and accessibility profession, though the others shared important information and perspectives.

Unfortunately, I don’t have the time to provide more details on these presentations.

Hybrid Technology for a Sustainable Future

Shane Shulze of Ford Motor Company presented information on what Ford has been working on in regard to battery powered cars. His talk was focused on battery technology, and it was interesting to see the audience’s response.

One participant spoke up and asked how these new cars will address the safety issues of quiet-running vehicles. Shane’s answer was that Ford is aware of the issue. I suppose we can look to future prototypes to see what they do with it. (From a UX perspective, I think that is a really interesting question: what are the design concerns in regard to the volume and appropriateness of the audio?)

e-Government Services for a Sustainable County

Salina Washington of Oakland County and Constantinos Coursaris of Michigan State University presented on how Oakland County has transformed their delivery of services to citizens of Oakland County with the eGov department of the county government.

This presentation was inspiring. We know that good, usable technology can improve service delivery and decrease costs, but this was an actual example of that happening.

The take-away from this was that when faced with a challenge, like a massive cut in budget, instead of going the traditional route of laying people off, think creatively and as a group come up with ideas on decreasing costs and making the most of the resources that each part of the government agency uses.

Sustainability and Agility: UX Designs for Eforms

John Rivard spoke about integrating UX and Agile development at a bank. He shared examples of their workflow, like work-ahead, follow-behind. This was also an excellent presentation and it seems that the way John is working is similar to how we operate at Covenant Eyes.

That’s all folks

All in all, it was a good day with some unexpected but enjoyable talks. Good job to the organizers from the MSU Usability & Accessibility Center! Also, check out Tom Schultz’s posts on his blog.

WUD: captcha problems discussed in assistive tech expo

Tom Schultz and I are at the World Usability Day event hosted by Michigan State University today. We sat in a session this morning that focused on a demonstration and discussion of assistive technologies.

An interesting point in the discussion was the problems CAPTCHAs pose for people with visual impairments. One of the presenters went through a process at the Dell website: he selected a computer and went to purchase it, but on the way to checking out, he had to pass a CAPTCHA that asked him to enter the characters he sees in the image into a text box.

Of course the problem was that he could not see the image and there was no alternative available. No sale.

Someone else brought up Google’s use of audio as an alternative to the visual CAPTCHA, but the presenters pointed out that for someone who has both visual and hearing impairments, this is still insufficient.

(You can try the audio CAPTCHA on the first page of the sign-up process for Blogger. Try it out!)

They pointed out that a CAPTCHA that used reasoning could be a more accessible approach, and another idea was to send an email to verify that the agent is, in fact, a human (that’s the point of a CAPTCHA).

I’ll probably post another update from this conference later.

Paper: crucial to Web design

At first thought, Web design is a digital job. But as long as I have done this work, I’ve had paper on hand.

In the 90s I’d quickly sketch different ideas for overall design, narrow it in, and then sketch out the plan to create the layout with tables, complete with pixel dimensions for each cell and notations on margins, borders, and padding. I’d annotate the sketch with hexadecimal codes for colors to use. The process placed ink before pixels.

As CSS gained ground and the industry left table-based layouts behind, I sketched fewer details, but usually still rapidly drew thumbnails of page layouts on paper before settling in.

For a time, I thought I could do most of this work with computer programs as my primary tools: Word, Excel, Photoshop, Fireworks, Flash, Dreamweaver, and straight textual coding tools like BBEdit. Later, OmniGraffle joined the toolbox, and I did first-round design digitally.

Ink before pixels again

Over the last six years, paper and ink have again become my first tools. Hand-drawn sketches and notes are fast and fluid, far more so than code or Photoshop.

With a quick sketch in hand, the coding can leapfrog some easy-to-make first mistakes. For instance, last week I needed to create some screens for a three-page sign-up process. I spent about 30 seconds drafting two quick page layouts on paper before I jumped into Photoshop and Dreamweaver to create the graphics and code it up.

By doing the second sketch, I was able to make better use of a design grid and utilize white space more effectively. That’s 30 seconds well-spent, and it means I didn’t have to waste time in Photoshop or with code on a design that had whitespace problems.

Good paper is worth it

When I started my latest job, I asked for some paper to sketch with. I was provided with some cheap cardboard-backed white notepads. Each pad fell apart within a week or two of use, better suited to ripping sheets off than holding together. Irritating!

I started to use my own notebooks for work, and just a couple weeks ago purchased a set of Moleskine Volant notebooks. They are softcover notebooks about 5 by 8 1/4 inches, and are well-bound with excellent ruled paper. I think they’re the best notebooks I’ve ever had.