LinkedIn UX groups, data and questions

Doesn’t it seem like there are a lot of user experience groups on LinkedIn? I’ve joined a few of them in hopes of staying up to date on topics, but I quickly realized there were many more possible groups, and they all started looking pretty similar to me.

Why would I join this group versus that one?

Some are tied to specific organizations, like the Information Architecture Institute, the Interaction Design Association, or the Usability Professionals Association. Others, like the Boxes and Arrows group, are tied to a specific industry publication. If you are a member of such an organization, joining the matching LinkedIn group probably makes sense.

Some are focused on narrower subjects, like the Agile Experience group or mobileUX. If you have a narrower interest and find a group that fits, perfect.

Some differentiate by being localized. The UPA Israel, for instance, or London User Experience Professionals. Cadius is a group for UX people who speak Spanish. I think that’s fantastic.

But then we have all those other groups that ooze together, subject-wise. I’ll bet each has its own creation story, but at this point, the differentiation is slim.

Don’t these top 5 UX LinkedIn groups sound similar?

  1. User Experience
  2. Interaction Design Association
  3. UX Professionals
  4. UX Professionals Network
  5. User Experience Group

The second item is the group for members of IxDA, but the rest are simply professional groups for UX people. I’ll bet that if you mixed together all the content and members of those groups, you would first see a lot of repetition in members and topics, and second, that you couldn’t separate them back into their original groups without a key. What does that say about these groups?

Some data on these groups

For what it’s worth, I’ll post some data I harvested while trawling LinkedIn this afternoon. (Why did I do this? Am I mad? No, but I’ve been sick all weekend, and in my addled state, cataloging some LinkedIn groups was the most obvious thing to do.)

The following data is merely what I found this afternoon. It is not comprehensive.

Chart showing membership rates of about 40 user experience groups on LinkedIn as of March 11, 2012.

Want a little more information? You can download an Excel spreadsheet I used while gathering this information. The worksheet includes columns for ID, Title, Membership, Parent Group, Created date, Type (e.g., Professional Group), Owner, Coverage (e.g., Earth, Greater London, UK, etc.), Language (didn’t fill that in), and Organization (e.g., IxDA).

Here’s the Excel file: User Experience (UX) groups on LinkedIn, March 2012 (.xlsx)

Too many groups!

In closing, I think it would be easier and less time-consuming to stay up to date in the field if there weren’t so many overlapping groups. What if some of these groups merged? Would people get too upset about that?

(Now for more tea and expectorants.)

My 2.5 days in San Francisco: MX 2010

Red stone church near green trees, surrounded by skyscrapers.
View from the top of Yerba Buena Gardens, San Francisco, March 2010.

Saturday PM: Sunshine!

I actually began to sweat under my blazer from the warm sun shining brightly through the window.

I had arrived in San Francisco a little early on Saturday, dropped my suitcase off at the Intercontinental Hotel, and walked around the corner to a sandwich shop for a bite to eat and to get online. As I draped my coat over the back of the chair, I decided I really like San Francisco. It’s the sun, I admit it. Oh, and I had already noted that the two billboards I noticed on the taxi from the airport were pure tech: one for an enterprise search system and another for PGP. Billboards talking to me? Amazing.

After settling in at the hotel, I had dinner with my old colleague Chris Burley and his girlfriend at a nice Italian restaurant. Chris is awesome. I love talking with him because he has such passion for what he does, which currently is helping to lead efforts like urban farming in the Bay Area.

Sunday AM: 3 good things

The next morning I woke early due to the time zone difference, and I had three excellent experiences:

  1. In the aching fog of caffeine deprivation, had the best cup of coffee of my life, thanks to the Blue Bottle Café. (I admit, I ordered a second cup to go.)
  2. Paused in the Yerba Buena Gardens, where elderly folks practiced tai chi and parents snapped photos as their little children hid behind a waterfall. I stood on a bridge and watched the morning sun ripple on the glass of San Francisco skyscrapers.
  3. Crashed a church service at a music venue called Mezzanine put on by a group that calls itself IKON. I was the oldest person there, amidst a crowd of art school students. We sang, we listened to a teaching from the Word, we had communion. It was good.

Sunday PM: MX day 1

Sunday afternoon saw the start of the 2010 MX Conference.

MX 2010 focused largely on managing user experience rather than on the tactical end of UX practice, and there were some thought-provoking presentations from people who have been managing user experience for a number of years, in a number of different types of companies. Off the top of my head, presenters represented firms in financial services (Vanguard), publishing (Harvard Business Review), retail sporting goods, and online media (YouTube).

The series of talks was fantastic. It kicked off with a keynote by Jared Spool, in which he shared insights such as the high correlation between Gallup’s Customer Engagement (CE11) metric and the quality of user experience. Spool’s keynote turned out to predict themes that carried throughout the presentations, among them the importance of establishing a vision for user experience and the need to address experience well across multiple channels (web, mobile, physical space, etc.).

Spool talked about three core attributes necessary for great user experience: Vision, Feedback, and Culture. He posed three questions that UX managers should ask.

  1. VISION: Can everyone on the team describe the experience of using your design 5 years from now?
  2. FEEDBACK: In the last six weeks have you spent more than two hours watching someone use your design or a competitor’s design?
  3. CULTURE: In the last six weeks have you rewarded a team member for creating a major design failure?

After the conference reception, I wound down the evening by taking a walk around a few blocks and ending at a nearby bar. I ate a burger and watched the Academy Awards for a while. Back at the hotel I watched the end of a Clint Eastwood Western flick and fell asleep.

Monday AM+PM: MX day 2

I woke at 4 in the morning. I checked analytics, email, and my usual RSS feeds. I stretched, washed, dressed, and still had time to kill. I read a few chapters in The Shack, a book Adam gave me last week.

I chatted throughout the day with Haakon, a usability specialist attending from the design company Tarantell in Norway, and as he sipped his coffee, I decided not to mention my mere three-hour time difference.

The rest of the day was another series of excellent presentations. Themes: customer (more than user) experience, vision that guides the business, new models for working in the network, UX leadership stories from YouTube, rethinking customer experience at Harvard Business Review Online, understanding the holistic customer, data-driven design decisions (and when not to rely on data for design decisions), experience design as business strategy, and operating as a chief experience officer in your company.

It was great to hear first-hand the stories from these user experience leaders. Now, for what to do with it all when returning to the office.

Tomorrow and then

Tomorrow morning I fly back to Michigan, and need to get my head back into product owner and user experience work. But I also need to hold onto the ideas from this conference, and shift into actively leading user (or is that customer) experience work at Covenant Eyes.

How to write release notes

I confess, I’m a release notes reader, and I’ve read some overwrought release notes lately. When you treat them as an installation guide, a features list, or a list of software conflicts, you’ve got it wrong.

The purpose of release notes is simple:
Release notes explain what changed with this version of your software. Period.

I hope this article will help you write release notes with clarity and brevity.

Title format for release notes

The title for your document should include specific information:

  • Name of product
  • Version number

For example, if your product is RubberDucky and this release is version 3.3.5, the title for your release notes document should be RubberDucky 3.3.5 Release Notes.

Make the title big and bold at the top of the page. Refer to it in links exactly as the title reads.

Consider following the title with these bits of information.

  • One sentence overview of the product
  • Date of the release
  • System requirements
    • Note changes, like “Discontinued support for Windows XP.”
  • Link to installation instructions
  • Link to a user manual
  • Link to a release notes archive

Other sections in release notes

Break the release notes document up into sections, each with its own heading. Here are some sections to consider.

  • Additions
  • Removals
  • Changes
  • Fixes

Keep the actual descriptions brief. Release notes are often little more than a bulleted list of updates, and that’s fine. If there is a series of small technical changes, try to describe them as a theme. For instance, “Improvements to the communication between the software and our servers.”
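
Pulling the pieces together, a skeleton for the hypothetical RubberDucky release used as an example above might look something like this (the entries are invented purely for illustration):

  RubberDucky 3.3.5 Release Notes
  RubberDucky keeps your bath-time schedule organized. Released March 5, 2012. Requires Windows Vista or later (discontinued support for Windows XP).

  Additions
    • Option to export reports as PDF.

  Changes
    • Improvements to the communication between the software and our servers.

  Fixes
    • Fixed a crash that occurred when closing the preferences window.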

However, if there is an update that is important for users to understand, do not sacrifice clarity for brevity. Write enough of a description to explain the feature, but no more than necessary.

How do you know if an explanation is too short or hard to understand? Ask someone who is familiar with the software but doesn’t really know about the release to read the explanation and explain it to you in his or her own words.

What about personality?

Release notes should be easy to scan, so avoid inserting witticisms. However, if the company is proud of a feature, it doesn’t hurt to brag about it, so long as the bragging is brief.

Here’s a nice example from TextWrangler 3.0 Release Notes.

Brag about it. Snippet from TextWrangler Release Notes.
BareBones Software inserting a little attitude into their release notes.

What about posting existing, known defects?

This question quickly becomes a philosophical one. In my opinion, a company should be transparent about known defects in its software and earnestly try to fix those problems. You can see this behavior in some open source projects: their defect tracking systems are public, and the bugs are out there for the world to see. With a large enough user base, defects in your software will probably become known eventually anyway.

However, I do understand that in some cases advertising known defects is a security and stability liability and just shouldn’t be done. My preference is for that sort of decision to be made on a defect-by-defect basis, not as a corporate blanket policy.

Regardless, known defects or incompatibilities do not belong in your release notes document. You could, however, link to a list of them in your release notes.

Organizing archival release notes

What do you do with all those release notes from prior versions of your software? Archive them on your website so you and your customers can get to them.

One page, all release notes

If your release notes are brief, you might want to put every version on a single page, with the most recent release notes at the top. Fetch Softworks currently takes this single-page-for-all-notes approach.

One page of release notes per version

For the sake of clarity, I would rather give the release notes for each version its own page, with a release notes archive page that links to every version. BareBones Software takes this index-of-release-notes approach.

Some companies keep a current release notes page up-to-date so they don’t have to continue updating links. Again, BareBones follows this approach: http://barebones.com/support/bbedit/current_notes.html

Do you have good (or bad) examples of release notes?

Not having seen any “best practice” document for release notes, I wrote this article. Do you agree? Disagree?

If you have examples of great, or really awful, release notes, please comment with the web addresses so we can all see them. Thanks.

jQuery: Show password checkbox

I wrote version 1 of a jQuery plugin during the last couple of days. Read more about jquery.showPasswordCheckbox.js.

The basic functionality is to provide a checkbox on web forms to reveal the password text, so people can choose to view the password they are entering as they enter it.

Argh! I’m pen-less!

Photo credit: Tony Hall (flickr.com).

Pen-less. It’s 9:30 in the evening, and I need to write out some thoughts (about a split-complementary color set).

At work last Friday, the pen that I’ve had with me for some months now finally gave up its last ink. It was a Pilot Precise V5, black.

My habit has been to have that pen in my left front pants pocket, reliably at hand. I guarded it, making sure to have it back if I let a colleague or a daughter use it for a moment. I gave other pens like it away, but kept that one.

Of course I have other pens. Bic ball-point pens: the kind you get in bulk in the plastic bags during back-to-school sales. I hate those pens. They fail so often, and you have to drag the ink out of them, scraping across paper. Scribble in circles first just to get them warmed up. Lazy bastards. Then you have to draw across your strokes again, filling in ink on the empty indentations of your first pass at writing.

I’m irritated at myself for getting into this pen-less position. Luckily, I have Plan B: pencils and a sharpener.

Nephtali web framework creator talks FP

Nephtali project website screenshot
Adam Richardson of Envision Internet Consulting has been a long-time collaborator and good friend of mine, and over the last few years I’ve seen him pursue knowledge in web programming with persistence that I’ve never seen from anyone else.

One of Adam’s projects is Nephtali: a web framework that focuses on security and considers the usability of the framework itself. Adam has labored over details in his latest version of Nephtali that will make life better for developers. For instance, he planned the naming convention and namespaces for functions so that in an IDE like NetBeans, the functions appear grouped logically in an easy-to-access format.

Nephtali is up to version 3.0.5 at the time of this writing. Earlier versions were written in completely object-oriented PHP, but in version 3, Adam rethought Nephtali’s OOP base and rewrote it using functional programming (FP).

For the last month or so, Adam has been lobbying various hosts to upgrade to PHP 5.3 or higher, because Nephtali requires at least that version. It is right on the cutting edge. I asked Adam a few questions about Nephtali, and that dialogue follows.

Davin: Nephtali requires the latest version of PHP, version 5.3 or higher, but many hosting providers don’t provide that yet. What about PHP 5.3 is worth waiting for?

Adam: PHP 5.3 includes many enhancements and bug fixes, but the features that facilitated Nephtali’s general approach and architecture were support for namespaces and the new Functional Programming (FP) capabilities.

Davin: I’m familiar with object oriented programming, but you’re talking about “functional programming.” Can you summarize the difference, and explain why you decided to go with FP instead of OOP with Nephtali?

Adam: Most programming languages offer the ability to define functions; however, that doesn’t necessarily make them functional programming languages. It’s easy to get into flame wars over what a “true” functional language is, but I’ll lay out some general principles:

  • Functions can be passed around just like other datatypes.
  • Closures allow variables that are in scope when a function is declared to be accessed and carried around within the function.
  • Side effects (changing the value of a variable within a function) are limited.
  • Many FP languages natively support currying (the ability to define a subset of a function’s arguments and then allow other functions to finish defining the others).

PHP now supports the first two, and with some discipline, you can limit the impact of side effects within your code (there are even some clever hacks for the currying issue). But the big question is, “What does this buy you?”

Simplicity.

Object Oriented Programming (OOP) bundles variables with the functions (methods) that directly interact with the variables.  This does provide a degree of encapsulation, as the accessor methods make sure that instance and class variables contain what is expected.  However, the issue often isn’t “What” a variable is changed to, but rather  “When” a variable is changed.  This problem of “When” is most glaring for OOP developers when implementing parallel processing, an issue that has produced many complex, clunky answers.

Taking an FP approach simplifies the question of “When”, as you move from a paradigm of altering variables to one of acting on values returned from functions.  Relatively speaking when following general FP conventions, writing unit tests is simple, writing parallel processing apps is simple (see Scala, Clojure, Erlang, etc.), and as it turns out, writing a capable web framework is simple, too.
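
For readers who, like me, are newer to this style in PHP, here is a minimal sketch of the first two principles in PHP 5.3: an anonymous function passed around as a value, and a closure capturing a variable from the enclosing scope. This is my own illustration, not Nephtali code, and the names in it are made up.

  <?php
  // A function that returns a function: $tax is captured by the closure.
  function makePriceFormatter($tax)
  {
      return function ($price) use ($tax) {
          return number_format($price * (1 + $tax), 2);
      };
  }

  $withTax = makePriceFormatter(0.06);

  // The closure can be passed to higher-order functions like array_map,
  // and nothing outside the closure is modified (no side effects).
  $prices    = array(10.00, 24.50, 3.99);
  $formatted = array_map($withTax, $prices);

  print_r($formatted); // 10.60, 25.97, 4.23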

Davin: What about models? So many of us in the web field have become familiar with the MVC (model, view, controller) architecture in frameworks, and it seems like Nephtali doesn’t use the models concept at all. Is that right, and if so, what do you do about databases?

Adam: Simplicity.

In terms of DB interaction, I like PHP’s PDO capabilities and security. Performing simple DB work is easy in Nephtali, as you can generate code very quickly using Nedit, the online code generator for Nephtali. Nephtali provides some simple enhancements (functions that automatically handle table inserts, updates, and deletes; easy connection management; etc.), but you’re always working close enough to the basic PDO capabilities that it’s still very easy to perform transactions, connect to multiple DBs, work with existing tables that don’t follow particular naming conventions, and whatever else your unique environment may entail. One line of code is all it takes to grab a set of rows from a DB.
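
For context, the underlying plain PDO work that Nephtali wraps looks roughly like the following. This is my own sketch of standard PDO usage, not Nephtali’s helper functions, and the connection details and table name are invented.

  <?php
  // Plain PDO (not Nephtali) -- connection details and table are placeholders.
  $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'secret');
  $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

  // A prepared statement with a bound parameter guards against SQL injection.
  $stmt = $pdo->prepare('SELECT id, title FROM posts WHERE status = :status');
  $stmt->execute(array(':status' => 'published'));

  // Grab the full result set as an array of associative arrays.
  $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);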

Second, utilizing the parallel processing capabilities of cURL, Nephtali provides some special features for web requests. A couple of lines of code can retrieve a web request (in parallel with any other web requests) and format the retrieved data into whatever container (object or array) you’d like.

Davin: I saw the post on the Nephtali blog about Nephtali’s parallel processing for web requests. Can you explain when that is useful, and when I shouldn’t just run ahead and parallel-process everything?

Adam: If you have a page that only makes use of one web service, you don’t gain anything. However, if you have a page like Nephtali’s homepage, which makes a request to Google Code for the latest download and also makes a request to the WordPress blog for recent entries, you can gain a significant performance improvement by processing those requests in parallel. Instead of ending up with serial calls to the two services (GoogleCodeRequestTime + WordPressRequestTime), the time for the parallel request equals the greater of the two (GoogleCodeRequestTime or WordPressRequestTime).

Nephtali handles the processing for you automatically.  Always use the request() and response() functions, and Nephtali will make things faster when they can be faster.  That’s it.
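
Nephtali’s request() and response() functions hide the details, but for readers curious what parallel requests look like in plain PHP, here is a rough curl_multi sketch. It is my own illustration of the underlying idea, not Nephtali’s internals, and the URLs are placeholders.

  <?php
  // Plain curl_multi (not Nephtali's request()/response()) -- URLs are placeholders.
  $urls = array(
      'http://example.com/latest-download',
      'http://example.com/blog/feed',
  );

  $multi   = curl_multi_init();
  $handles = array();

  foreach ($urls as $url) {
      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
      curl_multi_add_handle($multi, $ch);
      $handles[$url] = $ch;
  }

  // Run all handles at once; total time is roughly that of the slowest single request.
  do {
      curl_multi_exec($multi, $running);
      curl_multi_select($multi);
  } while ($running > 0);

  $responses = array();
  foreach ($handles as $url => $ch) {
      $responses[$url] = curl_multi_getcontent($ch);
      curl_multi_remove_handle($multi, $ch);
  }
  curl_multi_close($multi);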

More about Nephtali

Learn more about Nephtali at nephtaliproject.com. When you’re there, check out the screencasts on using Nephtali. One of the great features on that site is NEdit, a tool that you can use to write up a lot of the code you’ll need for Nephtali pages.

Oh, and don’t hesitate to use the contact form. Adam loves talking with people about Nephtali, and I’m sure he’ll happily answer questions or respond to comments about the framework.

How WordPress falters as a CMS: Multiple content fields

WordPress is amazing and keeps getting better, but I want to be clear about an inherent limitation that WordPress has as a content management system (CMS). That limitation is that WordPress doesn’t handle multiple content regions on web pages.

Too strong? With WordPress, you can try to use custom fields or innovative hacks like Bill Erickson’s approach to multiple content areas using H4 elements in the excellent “Thesis” theme. Unfortunately, neither of those approaches really deals with the depth of the design problem that often requires multiple content areas for pages.

As an information architect/user experience designer, I’ve been involved in many projects that required more types of content on any single screen than WordPress is designed to handle.

Let me draw out what I’m talking about here.

Exhibit A: Page content that WordPress is designed to handle

In a standard WordPress page or post, you’ll see these author-controlled pieces of content.

  • Post/page Title
  • Body
  • Excerpt (often not used)
Standard WordPress content fields include the title, excerpt, and body.

There are other sets of data for a page or post that an author can control, too, but these are metadata such as tags, categories, the slug (which shows up in the URL), and possibly search engine optimization information like title, description, and keywords.

For a normal blog, many online trade journals, and a lot of basic websites, this really covers the bases. The body contains the bulk of the content including images, video, and audio that can be intermingled with the text itself. This model is very flexible, and it has definitely proven itself.

Exhibit B: Page content that pushes WordPress too far

In 2009, there was a small project at work to develop the website Covenant Musicians, and because the person who would keep the site updated was already using WordPress, we made the decision to build this site with WordPress too.

Well, if you look at one of the destination pages for this site, the musician profile page (here’s one for example), you’ll notice several additional pieces of content that may or may not be present on any particular musician profile page. When they are present, they need to be in certain places, sometimes with particular content.

This custom WordPress page uses fields in addition to the standard options: Musician Image, URL, and Video.

The problem is that to control those extra pieces of content (the video, the band image, the link to the band’s website), the site owner needs to use WordPress’s custom fields in very precise ways, without the benefit of WordPress’s content editing tools. What a drag!
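
For a sense of what this means on the template side, here is roughly how a theme reads custom fields with WordPress’s get_post_meta(). The field keys and template name below are hypothetical stand-ins, not the actual keys we used on that site. Entering those key/value pairs correctly by hand, with no tailored interface, is exactly the part that trips people up.

  <?php
  // Inside a theme template (e.g., a hypothetical single-musician.php).
  $musician_image = get_post_meta( get_the_ID(), 'musician_image', true );
  $band_url       = get_post_meta( get_the_ID(), 'band_url', true );
  $video_embed    = get_post_meta( get_the_ID(), 'video_embed', true );

  // Each piece is optional, so the template checks before printing it.
  if ( ! empty( $musician_image ) ) {
      echo '<img src="' . esc_url( $musician_image ) . '" alt="' . esc_attr( get_the_title() ) . '" />';
  }
  if ( ! empty( $band_url ) ) {
      echo '<a href="' . esc_url( $band_url ) . '">Visit the band website</a>';
  }
  if ( ! empty( $video_embed ) ) {
      echo $video_embed; // raw embed markup entered by the site owner
  }
  ?>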

To make life easier for the site owner, we ended up recording screencast instructions on how to use these fields and delivered those help files with the site itself. (We used Jing by Techsmith, by the way.)

It would’ve been better had the interface been clear enough so that we didn’t feel the need to document the process of updating these destination pages, but that’s the trouble with stretching WordPress beyond its default content fields.

Ask too much of WordPress and ease-of-use is the casualty

Do you see the difference? When an effective design solution requires multiple types of content per page, using WordPress will actually make your website difficult to manage. WordPress is usually so easy to use that when you hit this wall, it is very apparent.

When you’re at that point, WordPress is probably not the right CMS to choose.

Should WordPress improve in this area?

If WordPress is going to grow in the content management systems field, this shortfall will need to be addressed, whether through the core application or through an excellent plug-in (is there one already that I missed?).

However, WordPress is really excellent at what it does already, and the better course might be to decide to keep the features in check and let other systems compete in the mid-to-enterprise scale CMS arena. Scope creep never stops, and a good application strategy knows when to say “no.”

Am I wrong?

Am I off-base here? This is just one aspect of WordPress that should limit its use. Another, which should cause designers to think twice, is faceted navigation that requires more than one dimension (tags can probably handle one dimension). But, again, those are more complex design requirements.

I’m not a WordPress consultant, and I’ll bet some of you would like to point to the errors in my thinking. Let’s hear it.

Experience theme for Covenant Eyes

Cindy Chastain’s article, “Experience Themes,” at Boxes and Arrows outlines a neat way to package the concepts that help user experience designers put creative work into context.

When I was leading many design/development projects at a time, I’d write a creative brief for each—it helped me and the team stay clearheaded about each project. An experience theme seems like an alternative to a creative brief.

The following thoughts apply Chastain’s article to my work at Covenant Eyes.

Covenant Eyes is rich with stories

At Covenant Eyes, Inc., we have a full-time blogger, Luke. As I see it, Luke’s job is to draw out the stories surrounding Covenant Eyes and to share them using the Internet. He’s our storyteller.

What are the roles? There are so many stories, from people in so many places in life.

  • husbands, fathers
  • wives, mothers
  • children
  • pastors, rabbis
  • counselors
  • porn addicts, recovering porn addicts, people who have beaten the addiction
  • and the list continues

What are some theme concepts?

  • For people fighting a problem with pornography: Learn to be honest again (These words come from Michael Leahy’s mouth while he was visiting our offices.)
  • For mothers with children who use the Internet: Protect my family
  • For fathers with a teenage son: Teach him to be responsible for his actions

Experience transcends our services

What work do we do at our company? Although others I work with may claim we deliver software, I think we deliver information. Our software allows us to provide information-rich reports on Internet usage that can be used within relationships. I think of these as “accountability relationships.”

The theme concepts listed above have little to do with software or even our service. The real value we provide is a sense that what could be someone’s little secret is not actually hidden. That little bit of knowledge has proven its ability to change lives, and relationships, for the better.

The hard part is carrying the experience theme across our touch points with users

I recently helped put together a spreadsheet to inventory the automated emails we send to users at various points. There were over 60 emails, and they fulfill needs ranging from billing concerns to helpful reminders after a few weeks of being a customer. Many of these messages should be revised, and keeping the theme in mind will help create a coherent experience for our users.

Covenant Eyes has multiple touch points with its users.

Beyond these emails is a myriad of other touch points:

  • sign up form
  • help documents
  • filter settings controls
  • accountability reports
  • tech support phone calls
  • blog posts
  • and so on

Taken all together, these communications can benefit from an experience theme.

I suspect the key to pulling this off is to have everyone involved with crafting these touch points understand the experience theme, and then leave it to them to carry it through. As the company’s user experience lead, my job may be to facilitate the definition and adoption of an experience theme, and to motivate and lead by example so others will carry the vision.

Seams between systems and the Vignelli NYC subway map

I just read “Mr. Vignelli’s Map” by Michael Bierut over at Design Observer. In the post, Bierut remembers and analyzes why the public rejected Vignelli’s map of the New York City subway system. (Here’s the Vignelli subway map.)

The Vignelli map smartly acknowledged that, for passengers focused on navigating the subway system itself, above-ground geography was nothing but added complexity. So the map was instead oriented around the subway lines and stops themselves, abstracting away actual geography. This was a keen simplification from an information design perspective.

But consider this observation from Bierut’s article.

To make the map work graphically meant that a few geographic liberties had to be taken. What about, for instance, the fact that the Vignelli map represented Central Park as a square, when in fact it is three times as long as it is wide? If you’re underground, of course, it doesn’t matter: there simply aren’t as many stops along Central Park as there are in midtown, so it requires less map space. But what if, for whatever reason, you wanted to get out at 59th Street and take a walk on a crisp fall evening? Imagine your surprise when you found yourself hiking for hours on a route that looked like it would take minutes on Vignelli’s map.

The concept of designing the seams between systems has gained attention within the user experience design community over the last couple of years. This is an example of that problem of seams.

Passengers of the subway system are also navigators of the city itself, so their context of use extends beyond the subway, and their decisions end not merely with which stop to get on and off at, but with where they are going once they leave the subway.

Bierut makes the point:

The problem, of course, was that Vignelli’s logical system came into conflict with another, equally logical system: the 1811 Commissioners’ Plan for Manhattan.

How can designers account for the seams between the subway system and the city plan in order to produce a better-designed subway map?

NYC, of course, has a functioning subway map. Is functionality the only litmus test?

(I’ve taken the subway in New York City only once, and managed to get from Point A to Point B successfully, although with some anxiety.)