Categories
User experience, web, technology

Paper: crucial to Web design

At first thought, Web design is a digital job. But as long as I have done this work, I’ve had paper on hand.

In the 90s I’d quickly sketch different ideas for the overall design, narrow them down, and then sketch out the plan to create the layout with tables, complete with pixel dimensions for each cell and notations on margins, borders, and padding. I’d annotate the sketch with hexadecimal codes for the colors to use. The process placed ink before pixels.

As CSS gained ground and the industry left table-based layouts behind, I sketched fewer details, but usually still rapidly drew thumbnails of page layouts on paper before settling in.

For a time, I thought I could do most of this work with computer programs as my primary tools: Word, Excel, Photoshop, Fireworks, Flash, Dreamweaver, and straight textual coding tools like BBEdit. Later, OmniGraffle joined the toolbox, and I did first-round design digitally.

Ink before pixels again

Notebook page with a site diagram sketch

Over the last six years, paper and ink have again become my first tool. Hand-drawn sketches and notes are fast and fluid—far more so than code or Photoshop.

With a quick sketch in hand, the coding can leapfrog some easy-to-make first mistakes. For instance, last week I needed to create some screens for a three-page sign-up process. I spent about 30 seconds drafting two quick page layouts on paper before I jumped into Photoshop and Dreamweaver to create the graphics and code it up.

By doing the second sketch, I was able to make better use of a design grid and utilize white space more effectively. That’s 30 seconds well-spent, and it means I didn’t have to waste time in Photoshop or with code on a design that had whitespace problems.

Good paper is worth it

When I started my latest job, I asked for some paper to sketch with. I was provided with some cheap cardboard-backed white notepads. Each pad fell apart within a week or two of use, and was better suited to ripping sheets off than holding together. Irritating!

I started to use my own notebooks for work, and just a couple weeks ago purchased a set of Moleskine Volant notebooks. They are softcover notebooks about 5 by 8 1/4 inches, and are well-bound with excellent ruled paper. I think they’re the best notebooks I’ve ever had.

Categories
User experience, web, technology

XSL to get text from Apple Pages documents

Pages is Apple’s basic word processor, part of its iWork suite of applications. It’s not a bad program, but a number of months ago I needed to switch to MS Word for the Mac.

Well, this morning I was looking through some old files and found a document I had written in Pages and wanted to print. Unfortunately, I had removed iWork from my Mac, so I no longer had the software to open the Pages document.

After a cursory search on the Internet for a program that would let me open Pages docs without having the program itself, I came up empty-handed.

So, I inspected the Pages document and realized it was a package. (Right-click on the document icon and choose Show Package Contents.) The package contained an index.xml.gz file, which I unzipped to find the body of my document amidst a whole bunch of XML.
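If you prefer Terminal, the unzipping step is just a gunzip run inside the package (the document name below is only an example):

cd "My Document.pages"
gunzip index.xml.gz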

I momentarily considered reconstructing the text in TextWrangler, but thought it might be fun to write an XSLT file to do the work.

Please note that this is a 1st draft meant to retrieve the text from my document. It will not handle anything fancy, just text. Plus, it will only try to make each chunk of text into a plain-text paragraph in HTML, suitable for copying and pasting out of a browser window. Use at your own risk. 🙂

Ok, here’s the textFromPages.xsl file.
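That file does only what the previous paragraph describes. As a rough sketch of the approach (not the exact contents of textFromPages.xsl), a stylesheet like the one below ignores Pages’ specific element names entirely and simply wraps every non-empty text node in an HTML paragraph:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Emit a bare HTML page with one <p> per chunk of text in the source. -->
  <xsl:template match="/">
    <html>
      <body>
        <!-- Select every text node that isn't just whitespace, wherever it lives. -->
        <xsl:for-each select="//text()[normalize-space()]">
          <p><xsl:value-of select="normalize-space(.)"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

Because it keys on text nodes rather than Pages’ own markup, a sketch like this should survive differences between iWork versions, at the cost of occasionally picking up stray metadata strings alongside the body text.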

Others may take this initial XSL file and do what they will with it. I hope that if you take this and make it better, you’ll comment on this post to let me (and others) know.

For it to be useful, you’ll need to know how to apply an XSL transformation to a source XML file (specifically the index.xml from Pages).

Hint: Firefox will do the transformation for you if you include the proper xml-stylesheet directive right after the XML prologue in the source XML file. It looks like this: <?xml-stylesheet href="textFromPages.xsl" type="text/xsl" ?>
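If you’d rather not involve a browser and you have libxslt’s xsltproc command-line tool available (it ships with Mac OS X), the same transformation can be run directly, writing the result to an HTML file:

xsltproc textFromPages.xsl index.xml > mydocument.html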

Categories
User experience, web, technology

Bing delivers surprising amount of traffic to rangelistings.com

One of my hobby sites is rangelistings.com, a site with the goal of providing a map of each state with the locations of shooting ranges on it. I keep an eye on the web traffic pretty regularly, and about 90% of the traffic it receives is from search engines.

Until the last couple of weeks, 80 to 90 percent of the site’s search-engine traffic came from Google. Over the last month, overall traffic has increased from around 60 visits per day to around 90. Where does it come from? Still search engines, primarily, and traffic from Google has increased noticeably during this time.

However, Bing is also making a surprisingly strong showing. Click on the chart below to view the details.

(Click on the image to view a larger version.) Chart of traffic sources for rangelistings.com from Oct 1 to Oct 16, 2009. The search engine Bing is suddenly showing at almost 30%.

Categories
User experience, web, technology

Many stories in user experience

I just watched this TED talk, “Chimamanda Adichie: The danger of a single story.”

Please, take the 18 minutes to watch it, then continue to read.

Watching this talk brought to mind two thoughts related to user experience work.

First, in a recent edition of Interactions magazine there is an article, “Stories that inspire action” by Gary Hirsch and Brad Robertson, that has planted in me the desire to uncover the stories of the company I work for, Covenant Eyes. We hold so many ideas of ourselves, set by the expectations of management, employees, and so forth. But there are also the stories of our customers, and by telling many of these stories, I suspect we will hear some stark contrasts that will cause us to reckon with ourselves.

Have we stereotyped our corporate self?

The second thought regards personas. At Covenant Eyes, my colleague Jackie has taken the lead on creating a set of personas that we can use during our design and development work. This is a first for us. This week, as we reviewed the current set of about 16 personas, we worked on writing various scenarios for each one. I think the point of each scenario is to enrich the story of that persona.

But perhaps more important is that, across the full set of personas, however large it may get, we have properly balanced the stories each persona represents. I think, at its root, that is part of why personas are valuable in the first place: to challenge the stereotype, the single story, that we might hold in development about our “user.” These personas will be valuable if they can help us tell the many stories of our customers and users.

Categories
User experience, web, technology

A Sad Tale of Pagination

I imagine some professional chefs are accused of over-analyzing a bowl of soup now and then. In the same way, as a user experience designer, I get caught up in little pieces of user interface on a regular basis.

This particular story concerns a navigation system that uses pagination in what at first seems an obvious choice, but which, upon observation, turns out to be a very poor approach.

Background: Company setting

Covenant Eyes, Inc., is an 8-year-old software company in Michigan with about 50 employees. About a dozen are customer service representatives, some for enterprise customers and some for individual or family accounts. There are about 10 people on the IT team, which includes me.

Background: What service does our company provide? Internet accountability.

Take 2 actors, George and his friend Paul. George is addicted to online porn, but he really wants to beat his addiction because he feels it is wrong and could really mess up his life. To attack his problem, George installs our software on his computer. The software keeps tabs on George’s activity, and once a week sends a report of that activity over to Paul. Paul can then talk with George about George’s Internet activity. It seems simple, but removing the anonymity of his addiction is powerful.

The point, in a nutshell, is accountability. If George is trying to kick some bad online habits, his friend Paul now has information in these reports that he can use to hold George accountable.

The current design calls for pagination

These Accountability Reports are like executive summaries that include links over to what we call the “Detailed Logs.” This log is a full list of URLs that George visited.

Depending on the amount of activity, the log may have thousands of entries for Paul to navigate.

When these logs first became available, customers’ download speeds were more of an issue than they are today, so the developers knew that they could not simply put all the entries on a single page because the pages would take far too long to load.

Pagination to the rescue! The developers broke up the long list of URLs into pages, each page having 50 URLs. To help Paul navigate this long series of pages, numbered page links and “Previous” and “Next” links were placed at the top and bottom of each page.

So, let’s say Paul is looking at page 50. He would see something like the pagination navigation shown in Figure 1.

Figure 1: Pagination

This seems a good approach on two fronts.

  1. Paul won’t wait to download one page with over 8,000 URLs on it, but if we divide that time into, in this case, 165 separate downloads, each page will seem pretty quick.
  2. Pagination will work for Paul because he uses pagination on nearly every search engine results page. It’s nothing new to him.

Bingo. Problem solved. Right?

But why does it take so many clicks to find the right info?

I was standing next to Mike, one of our Customer Service Representatives, and asked him a seemingly simple question. “Mike, can you bring up that log and show me what was going on last Tuesday at 11:32 AM?”

I did not intend it to be a usability test, but it might as well have been. Mike helps people every day by walking them through reports and logs, so he is as expert as anyone gets at navigating these logs. Yet, the basic task of finding a page with a specific time on it was accomplished by a series of guesses, each slightly more informed than the previous guess. It took 8 tries before Mike got us to the right page.

Since then, I have seen people repeatedly click the “Next” button, flipping through each page to find the one page they want. With 165 or so pages in a log, this can take far more than 8 clicks.

If someone knows the date and time they want to view in a Detailed Log, shouldn’t they be able to get to that page without guessing on the first try?

20/20 hindsight: Why is it so hard to find the right page?

Pagination is a valid interface design pattern, and is perhaps most often seen on search engine result pages. Still, it does not work well here.

So, why doesn’t pagination work here? Thinking in information architecture terms can help answer the question.

Pagination is a metaphor from the print world

We’ve all grown up reading books and magazines, and so page numbers are a tool we take for granted. In print, they are used to keep track of where we left off so we can pick back up at the right point. They are also used as non-digital hypertext, like in a magazine where we see “continued on page 58.”

On the web, pagination has become something slightly different, but the metaphor carries over well enough to work for us. On search results pages, we now expect to see a pagination interface at the bottom of the search results to allow us to continue to the next page of 10 or 20 links. One difference on the web is that we expect those links on the first page to have higher relevancy than those on the following pages.

So, on the web pagination is an answer to a finding question, and is based on an underlying organizational system of quantity ordered by relevancy.

However, here the list is ordered by time but paginated by quantity. People want to find by time, but quantity is not metered evenly against time. So, page 1 might have 50 entries that cover 5 seconds of activity, and page 2 might have 50 entries that cover 32 hours of activity. There is no predicting how much time a given page of results will cover, and that is why people are left with so much guesswork.

Match the interface to the underlying information architecture and users’ information needs

In recent work, we’ve shifted to a time-based pagination (Figure 2) from a quantity-based pagination (Figure 1). We think this will go a long way towards helping people find what they want without having to guess.

Figure 2. Find-by-time instead of pagination.
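As a rough illustration of the underlying idea (a hypothetical sketch, not our production code), bucketing the log entries by calendar day before rendering gives each navigation link a predictable span of time:

<?php
// Hypothetical sketch: group detailed-log entries by calendar day so that
// navigation is by time rather than by a fixed count of 50 entries per page.
// Sample data standing in for the real log; each entry has a Unix timestamp and a URL.
$entries = array(
    array('time' => strtotime('2009-10-06 11:32:00'), 'url' => 'http://example.com/a'),
    array('time' => strtotime('2009-10-07 09:15:00'), 'url' => 'http://example.com/b'),
);

$byDay = array();
foreach ($entries as $entry) {
    $day = date('Y-m-d', $entry['time']);
    $byDay[$day][] = $entry;
}

// Navigation can now list days (with entry counts) instead of page numbers,
// so "last Tuesday at 11:32 AM" is reachable without guessing.
foreach ($byDay as $day => $dayEntries) {
    printf("%s (%d entries)\n", $day, count($dayEntries));
}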

I’ve watched a few users encounter this revised interface for the first time, and it has worked well so far. We may have introduced other usability issues in the process, but this is a step in the right direction.

Moral of the story?

Before implementing a user interface design pattern, be sure you first understand the information architecture and users’ information needs. Otherwise you risk using the wrong pattern, hurting your users’ experiences, and missing out on an opportunity for innovation and good design.

Categories
User experience, web, technology

UserVue review

UserVue is an application from TechSmith. At work we’ve used it recently to do remote user interviews, where we’ve had people who use our services talk us through some emailed reports they received from us.

It allows us to view and record a user’s screen, and save it as a WMV or Morae file. Additionally, it can record a phone call you have with the user. And, you can have colleagues at other computers observe the session and they can take part in an observer chat and submit notes along the way.

There’s that saying, “Hunger is the best sauce,” and I think the user experience design community has been very hungry for a tool like this. So, at the moment, I’m quite happy with UserVue.

It was quite easy to use, and it worked well.

Now, to save others some frustration, let me tell you about how it didn’t work well.

When I first tried it, everything seemed to be going great. I conducted a one-hour interview, and at the end it seemed to save the recording. Then, when I went to view the recording, I realized that the phone call was not included! The best part of that interview was, go figure, in what was said. I was distraught.

Why? UserVue only works on Windows, and I was running Windows XP Pro in a virtual machine on my MacBook Pro, using VMware Fusion. Apparently, there is a problem with that configuration.

I tested UserVue on that same computer, but instead of running it in a virtual machine, I booted into Windows using Boot Camp. UserVue worked fine that way, including recording the phone call.

Categories
User experience, web, technology

“P” is for parking – you could’ve fooled me

"P" for Parking sign in Owosso, MI. The P is pulled out of proportion.
"P" for Parking sign in Owosso, MI. The P is pulled out of proportion.

The first time I noticed the green P sign in Owosso, MI, I thought it was trying to tell me that the road was going to make some weird loop back onto itself.

After a few seconds I suspected it might actually have to do with parking (which, of course, it does).

It was one of those mini lessons in typography, and yesterday I finally got around to taking a picture of it (thanks Tom for letting me use your phone).

The problem with the sign is that whoever designed it stretched the letter “P,” malforming it just enough that I, as someone new to this area, failed to recognize it for what it is.

This photo was worth taking because it showed a standard “P” in the stop sign next to the malformed “P” in the parking sign.

Categories
User experience, web, technology

Note to self regarding “Blunder: Why Smart People Make Bad Decisions” by Zachary Shore

I recently finished Zachary Shore’s book “Blunder: Why Smart People Make Bad Decisions.” I think I heard an interview with Shore on NPR, and the lessons from the book seemed important.

So, some time has passed, I’ve read the book, and before I pass it on to someone else, I feel a need to record some personal notes about it, in case I lose it.

The blunders (titles of the first seven chapters of the book):

  1. Exposure Anxiety: The Fear of Being Seen as Weak
  2. Causefusion: Confusing the Causes of Complex Events
  3. Flatview: Seeing the World in One Dimension
  4. Cure-allism: Believing that One Size Really Fits All
  5. Infomania: The Obsessive Relationship to Information
  6. Mirror Imaging: Thinking the Other Side Thinks Like Us
  7. Static Cling: Refusal to Accept a Changing World

In the last chapter, Shore offers five ways to prevent blunders.

  1. Mental flexibility
  2. Willingness to question majority view
  3. Rejection of reductionism
  4. Development of empathy and imagination
  5. Embrace uncertainty

I don’t have the time that writing about this book deserves, but in relation to user experience design, these lessons certainly apply and complement what I’m sure many UX pros have already learned. The historical perspectives in the book made it interesting and provided realistic narratives to explain the various cognition traps.

As a designer and a product owner in scrum, I found this an important read. Advisors and executives should read this book, too.

There are some bits of information I try to memorize so my mind can recall them as needed: some proverbs, usability heuristics, certain interaction design “laws”…and now I will try to add these blunders to that list.

Categories
User experience, web, technology

1st foray with svn:externals

Okay, confession. Since the mid-90s I’ve helped produce hundreds of websites. Yet, I’ve been using source code management software for less than a year.

Hindsight, right? In retrospect, I was just plain ignorant. I can think of a few big issues on past projects that simply wouldn’t have mattered had I been using something like Subversion.

  • Before using Subversion: “Argh. I just royally whacked 189 files in one fell swoop. Curses! When was my last backup?!”
  • After using Subversion: “Hrm. I just royally whacked 189 files in one fell swoop. Eh, I’ll just revert to the prior revision and try again.”

Source code management irritant

I have a side project, rangelistings.com, built with the Nephtali PHP framework.

Updating the framework source code into my site’s code was trivial, but irritating. With each new release of Nephtali, I would upgrade by exporting the Nephtali source from its Google Code repository and then copying and pasting the framework files into my working copy.

I couldn’t just drag in a directory, because that would drop Subversion’s meta files from that directory and really mess up my working copy. Then I’d spend an extra half hour or so fiddling around to undo my screwed-up working copy. Very irritating.

svn:externals to the rescue

I knew about a feature in Subversion called “externals,” but had no first-hand experience with it. I investigated and realized that externals could be the answer to this particular problem.

Here’s how I made use of externals. The Nephtali files I had been updating with each upgrade lived in the working copy directory /nephtali/src/NCore/.

  1. Since you can’t create an external for a directory that already exists, I removed the NCore directory from my working copy and committed that change.
  2. Using Versions, an SVN client for the Mac, I added an svn:externals property to the src directory with the value NCore http://nephtali.googlecode.com/svn/trunk/src/NCore/.
  3. I ran an SVN update on the src directory and, as though by magic, suddenly had the up-to-date source of Nephtali’s core in my working copy.
Screenshot of Versions, an SVN app for the Mac

On my first attempt, I followed an example I had seen online: I created a text document containing the svn:externals definition and then set the property using ‘-F name_of_file.txt’.

That didn’t work so well. It created the folder, but failed to load the files from the remote Nephtali repository.

Once I put the local directory and SVN URL in the property itself, it worked like a charm.
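For anyone who prefers the command line to Versions, a rough sketch of the equivalent steps looks like this (the externals value is the same one described above; adjust the paths to match your own working copy):

# 1. Remove the existing NCore directory so the external can take its place.
svn delete nephtali/src/NCore
svn commit -m "Remove NCore in favor of an svn:externals definition" nephtali/src

# 2. Define the external on the parent src directory and commit the property change.
svn propset svn:externals "NCore http://nephtali.googlecode.com/svn/trunk/src/NCore/" nephtali/src
svn commit -m "Add svn:externals for Nephtali NCore" nephtali/src

# 3. Update; Subversion fetches NCore from the Nephtali repository.
svn update nephtali/src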

Here are a couple other pages I used while looking into svn:externals.

Categories
User experience, web, technology

Thanks MS. But I don’t want to disconnect from the Internet.

That’s just a bizarre instruction. “You can now disconnect from the Internet.”

MS .Net Installation - You can now disconnect from the Internet.

I almost blew it, too, by clicking the “Cancel” button, because (1) I thought it was all done and it was the only button-shaped thing available, and (2) I did not want to disconnect anyway.

Luckily, I noticed in time that the progress bar was still working.

I know Microsoft takes a lot of abuse for no reason other than that they are M$, but it’s just so easy with stuff like this.