Categories
User experience, web, technology

Revision: Using Javascript to add instructive text for input and textarea form fields

OUTDATED POST: HTML5’s placeholder attribute provides the behavior this Javascript was written for. Just use the placeholder attribute.

This is a code update to a prior post.

The Javascript I had posted earlier fell short in one important arena: when you submitted the form without entering your own text, the form would send in the instructive text. It should instead have erased that text prior to the form submission.

Further, in retrospect, the code was trying to do too much. It is enough to simply set a class of blurred and then remove it, instead of replacing it with a class of sharpened.

So, with that in mind, here is an updated set of Javascript that can be used to insert instructive or suggested texts into form fields, like text input fields or textareas.

The code

To start with, we need some scripts to add and remove class values from an element. The one caveat when doing this is to remember that many class names can be assigned in a single class attribute. So, when adding and removing class names, you have to be sure to leave pre-existing class names alone. Instead of writing these functions myself, I did a quick Google search for “Javascript addClass” and found three handy Javascript functions, hasClass, addClass, and removeClass, at openjs.com. I’ll let you look there for that code.
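If you want a feel for the idea without following the link, here is a minimal sketch of what such helpers might look like. This is not the openjs.com code, just an illustration of the caveat above: each function works on the whole space-separated class list so pre-existing class names are left alone.

```javascript
// Sketch of class-list helpers (illustrative, not the openjs.com code).
// Each treats className as a space-separated list of names.
function hasClass(el, cls) {
    // Match cls as a whole word: preceded by start-of-string or a space,
    // followed by end-of-string or a space.
    return new RegExp('(\\s|^)' + cls + '(\\s|$)').test(el.className);
}

function addClass(el, cls) {
    // Append the new name only if it is not already present,
    // keeping any names that were already there.
    if (!hasClass(el, cls)) {
        el.className = (el.className + ' ' + cls).replace(/^\s+/, '');
    }
}

function removeClass(el, cls) {
    // Remove just the one name and tidy up stray whitespace.
    el.className = el.className
        .replace(new RegExp('(\\s|^)' + cls + '(\\s|$)'), ' ')
        .replace(/^\s+|\s+$/g, '');
}
```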

With those class manipulation functions in hand, I rewrote the setSuggestedFormText function down to this.


/* setSuggestedFormText takes 2 arguments: mode and field.
   mode is either 'set', to put the suggested text in place,
   or 'clear', to remove the suggested text.
   The set mode also adds a class of blurred.
   field is the value of the ID attribute of the input or textarea element.
   April 13, 2008, Davin Granroth, granroth@gmail.com.
*/
function setSuggestedFormText(mode, field){
    var key = document.getElementById(field);
    var defaultText = key.title;
    if(mode == 'set'){
        if(key.value == '') {
            addClass(key, 'blurred');
            key.value = defaultText;
        }
    }
    if(mode == 'clear'){
        if(key.value == defaultText) {
            removeClass(key, 'blurred');
            key.value = '';
        }
    }
}


This function takes two arguments: mode and field. The first, mode, can be either set or clear. The one puts the suggested text from the title attribute into the element, and the other clears it out. Each checks whether the value of the field has been edited by the user, in which case it respects the user’s input.

The next step is to set up a function that calls setSuggestedFormText at the appropriate events. There are three events at which it needs to be called: when the page loads, when focus is put on the field, and when focus is removed from the field. Here is the code for setUpSuggestedFormText.


/*
   field is the id of the input or textarea field
   parent is the id of the containing form element
*/
function setUpSuggestedFormText(field, parent){
    if(document.getElementById(field)){
        // This will set the text on page load
        setSuggestedFormText('set', field);
        // These 2 events will toggle between the set and clear modes
        document.getElementById(field).onblur = function(){
            setSuggestedFormText('set', field);
        }
        document.getElementById(field).onfocus = function(){
            setSuggestedFormText('clear', field);
        }
        // Clear the text before submitting the form.
        document.getElementById(parent).onsubmit = function(){
            setSuggestedFormText('clear', field);
        }
    }
}


setUpSuggestedFormText takes two arguments: the id of the field element and the id of its parent form. The parent’s id is used in the trigger for the form submission. I had considered using the parentNode property to find the form, but depending on the markup, it may be several parentNodes up, so this is less code. However, if someone wants to write a function that recursively walks up the parentNode tree to find the parent form’s id, I’d be happy to incorporate it.
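In that spirit, here is a rough sketch of what such a walker could look like. findParentFormId is a hypothetical name, and this is not wired into the functions above; it simply climbs the parentNode chain until it hits a form element.

```javascript
// Hypothetical helper: walk up the parentNode chain from a field
// until a form element is found, and return that form's id.
// With this, callers would not need to pass the form's id separately.
function findParentFormId(node) {
    while (node) {
        // nodeName is upper-case for HTML elements, so normalize it
        if (node.nodeName && node.nodeName.toLowerCase() === 'form') {
            return node.id;
        }
        node = node.parentNode;
    }
    // No enclosing form was found
    return null;
}
```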

The last step is to call setUpSuggestedFormText once the DOM has loaded in the page. One slightly intrusive way of doing this is to call a Javascript setUp function from a script element just before the closing body tag. You can, of course, choose to use something like an addDOMLoadEvent function (Google for that) instead. Here is what the setUp function looks like in the Javascript.


function setUp(){
    setUpSuggestedFormText('to', 'msgForm');
    setUpSuggestedFormText('from', 'msgForm');
}


You can download sample code which uses an illustration of a message form with from, to, and message fields. The download is a ZIP file which contains an XHTML file, a CSS file, and a Javascript file.

Categories
User experience, web, technology

Using Javascript to add instructive text for input and textarea form fields

OUTDATED POST: HTML5’s placeholder attribute provides the behavior this Javascript was written for. Just use the placeholder attribute.

UPDATE 2008-04-13: This code has been refactored. Please view the updated posting.

Over the last year or so, I’ve worked on a number of websites where I wanted to add instructive text to form fields, but didn’t want that text to get in the way when a person actually tries to fill out the form. So, behavior-wise, the page would load with instructive (or suggested) text in the fields. It should have a class attribute that can be used to style it so it doesn’t look like a real value (grey and italic, for instance). When someone clicks on the field, the instructive text should go away and the class should go away so the default styling kicks in. When someone clicks off, the suggested text and class should pop back in, so long as the field is empty. The instructive text should come from the title attribute on the field (that made the most sense to me, given all the options).

So, here are a couple Javascript functions to do so.


function setSuggestedFormText(mode, field) {
    /*
       mode values must be one of: set, clear, or reset
       set is to check the value on page load
       clear is to clear the default value when someone clicks on the field
       reset is to fill in the default value when someone clicks off the field
       field is the value of the ID attribute of the input or textarea element
       Use .blurred and .sharpened selectors in your CSS file
    */
    var key = document.getElementById(field);
    var defaultText = key.title;
    switch(mode) {
        case 'set':
            if(key.value == '') {
                key.value = defaultText;
                key.setAttribute('class', 'blurred');
            }
            break;

        case 'clear':
            if(key.value == defaultText) {
                key.value = '';
                key.setAttribute('class', 'sharpened');
            }
            break;

        case 'reset':
            if(key.value == '') {
                key.value = defaultText;
                key.setAttribute('class', 'blurred');
            }
            break;

        default:
            break;
    }
}

function setUpSuggestedFormText(field){
    setSuggestedFormText('set', field);
    var node = document.getElementById(field);
    node.onblur = function(){
        setSuggestedFormText('reset', field);
    }
    node.onfocus = function(){
        setSuggestedFormText('clear', field);
    }
}


To use these functions, add them to your site’s Javascript library, and then call the setup function once the document has loaded. Google for addDOMLoadEvent, or you could probably use jQuery’s document.ready function.

Ok, usage. Let’s say you have an input field for an email address, like so:


<input type="text" name="email"
id="email" title="Your e-mail address" />


You would add the following javascript code to trigger the functions.


// Once the DOM is loaded, call this function
setUpSuggestedFormText('email');


Feel free to use it. If you make an improvement, please let me know.

To see it in use, try the search form on this page. For now, the Javascript source is at http://davingranroth.com/blog/custom.js

Categories
User experience, web, technology

Nephtali

Oh! Nephtali!

Alright, so if you look up “Nephtali” on thinkbabynames.com, you see it listed as a variant of “Naftali.”

The boy’s name Naftali \n(a)-fta-li, naf-tali\ is of Hebrew origin, and its meaning is “struggling”. Biblical: a son of Jacob and one of the ancestors of the 12 tribes of Israel.

“Struggling.” Apt.

How many phone calls from Adam have I received in the last year that began, “With Nephtali, if you were to do [blah blah blah], which of these approaches would you like the best?”

Countless. Every other day is a conservative estimate, considering that many days, there were many phone calls.

The man has been obsessively working on this “Nephtali.” Yesterday, in our ritual Nephtali conversation, he said, “So old man, it’s time to post this to your blog.”

After so many versions and so much work on Adam’s part, it’s time to break the silence.

What is Nephtali?

Well, I’ll give you my take on it, but for the practical details (and a download), go to nephtaliproject.com.

Nephtali is a framework for the development of data-driven websites. It uses PHP 5 and is object-oriented. It’s flexible, extensible, and plays nicely with other frameworks, like PEAR and the Zend Framework. Because of how it handles data, developers will tend to write more secure Web apps. It separates the presentation from the programming logic well, so that it is easy to use a tool such as Dreamweaver without worrying about messing up the behind-the-scenes programming. And the API was written with a continual drive to make it user-friendly for programmers, while maintaining a value on security.

My experience with Nephtali, thus far

I’ve played with a couple iterations of Nephtali, and once I got oriented, I found I liked it. (Granted, I’m biased, having had so many discussions with Adam about it.)

Compared to CakePHP, Nephtali is easier to use. For instance, in CakePHP, as in other Model-View-Controller frameworks (e.g., Ruby on Rails), database models are abstracted and calls in the app rely on those abstractions. While there is great benefit to this, I’ve found the abstractions themselves to be problematic. Nephtali can provide abstraction for fairly simple database actions (standard CRUD operations), but, gosh darn it, I like being able to jump in and write SQL. The reality for me as a developer is that database model abstractions cause me more grief than they are worth. Abstraction can slow me down, especially when the queries and relationships get tricky. Now, one of the big benefits of abstraction is that it can promote the DRY principle. Nephtali provides well for that through sub-classes, and I still have a direct, easy line to thinking about and working with the data in SQL.

The future of Nephtali

I could go on, but the proof for Nephtali will be in how many sites it is eventually used on, and how many other developers get their heads into it.

So, developers, check out Nephtali. And if you have questions or suggestions, do not hesitate to dialogue with Adam about it. And tell him to post a screen-cast of building a basic app with Nephtali.

For now, I’m going to see if I can get Adam to smash a bottle of champagne over his development box in honor of this release. Congrats, Adam!

Categories
User experience, web, technology

Can robots.txt prevent a dead page from being removed from a search engine’s index?

Problem: Pages taken offline 4 months ago are still indexed by Google and Yahoo!

In the course of work this week, we discovered that Yahoo! and Google still have record of a section of a website we removed nearly 4 months ago.

Surely the search engine robots had revisited the pages, repeatedly received 404 File Not Found errors, and proceeded to remove those pages from their indexes, right?

Apparently not. Here’s one explanation as to why.

Context: Using robots.txt in the process of removing old pages and posting new ones

I had also recently glanced at the robots.txt file for the domain in question, and noticed that there was a disallow rule applied to the pages that we had removed.

Four months ago, we released a new web service to replace one provided by these old pages. For a period of time, we kept both online with forwarders in place. During this time we added an entry to robots.txt to prevent spiders from indexing the old pages, and we encouraged indexing of the new pages by providing links to them.
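For illustration, the robots.txt entry would have looked something like this. The path here is hypothetical, since the real one isn’t in this post:

```
# Keep spiders out of the retired section (path is hypothetical)
User-agent: *
Disallow: /old-service/
```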

Once we removed the old pages, we didn’t think to remove the entry from robots.txt. After all, the pages weren’t there, so why would it matter?

Well, my revised theory holds that it does matter.

Hypothesis: Excluding the robot with robots.txt stopped it from even checking that the page was there. So, with no confirmation that the page was gone, it didn’t tell the indexer to update the records.

Here’s the scenario, from a search engine spider’s perspective.

(This model probably doesn’t technically match what’s going on in reality, but I hope it’s close enough to get some insight from.)

As a spider, I crawl links to pages, and when I find good pages (HTTP response of 200-OK), I read what textual data I can and send it back to the indexer to update our records.

However, if I try to get to a page, but the server tells me the page isn’t there, I send that as a note back to the indexer, and proceed to try to read the next page.

After receiving a few of these File Not Found messages over time about the same page, the indexer will remove that record from the index, as a matter of housekeeping.

Our problem may have been that we posted a robots.txt file which prevented the spiders from even trying to access these pages. When we removed the pages, the spiders never had a chance to get the 404 error, so they never reported a problem with the pages back to the indexer. The indexer, in turn, never triggered its housekeeping activities and has left the pages referenced in its index.

Moral of the story: Don’t screw with spiders.

Had we left robots.txt well enough alone, the spiders would have found the bad links and soon enough the indexes would have been updated. Because we short-circuited their processes, we have preserved index reference to pages that have long since died.

Categories
User experience, web, technology

XML file of shooting ranges in Michigan

As another small step in this process of manipulating a data set to upload to Google Maps, I took the cleaned XHTML I had from a few days ago, and used TextWrangler to do some quick search and replaces on the source code in order to produce this XML file.
ranges-data.xml

Next, I think, I’ll load this XML file into PHP using the simplexml functions, which will make it easy to run the data through a PHP-based geocoding processor that I’m sure I can dig up. The goal is to convert the addresses of the ranges into latitude/longitude points, which appear to be required pieces of data for the KML file I’m trying to piece together.

I may at the same time output the whole thing into KML format, since I’ll be in there with the data nodes anyway.

Categories
User experience, web, technology

Sample KML structure for the shooting ranges data

And here is a sample of what the intended shooting ranges KML feed will look like.

A couple notes:

  • the Placemark node will repeat for every shooting range
  • I’ll have to find a way to process the address information and generate latitude/longitude points—there are bound to be cases where the geocoder has trouble parsing an address, though I’ve worked through this before on a prior Web development project
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
  <Document>
    <name>Shooting ranges in Michigan</name>
    <description><![CDATA[Places to shoot in Michigan: Public/DNR ranges, shooting clubs, and businesses with firing ranges available.]]></description>

    <Placemark>
      <name>Flushing Rifle &amp; Pistol Club</name>
      <description><![CDATA[165 Industrial Dr., Flushing, MI 48433<br>http://www.flushingrifleandpistol.com/<br>]]></description>
      <Point>
        <coordinates>-83.866898,43.068909,0.000000</coordinates>
      </Point>
    </Placemark>

    <!-- Repeat Placemark for each range -->

  </Document>
</kml>
Categories
User experience, web, technology

Clean XHTML of shooting ranges data

My goal is to upload a comprehensive list of shooting ranges to Google Maps (see prior posting).

Why? I just think it would be cool to visualize places to shoot in Michigan.

Plus, once they are in there, I can see next steps, like creating a custom map of just the ranges that host matches for the Central Michigan Rifle and Pistol League shoots.

So, to accomplish this, here are the steps I’ve thought of.

  1. Clean the source code from the NRA page of ranges in Michigan into a valid codebase that can be more easily parsed
  2. Create a prototype of the form that data needs to take to be uploaded to a Google Map (looks like a KML file will do)
  3. Write an XSL document to use to transform the cleaned code (#1) to match the structure for the KML doc (#2)
  4. Run the XSL transformation and then upload the resulting KML document to Google Maps

Just for the record, here’s the cleaned source code (#1): 2007.12.16-shooting-ranges.html
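As a sketch of step 3, the XSL document might be shaped roughly like this. The source element names here (range, name, address) are hypothetical placeholders, since they depend on how the cleaned markup is actually structured:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://earth.google.com/kml/2.1">
  <xsl:output method="xml" indent="yes"/>

  <!-- Assumes each range in the cleaned source is a <range> element
       with <name> and <address> children; adjust to the real markup -->
  <xsl:template match="/">
    <kml>
      <Document>
        <name>Shooting ranges in Michigan</name>
        <xsl:for-each select="//range">
          <Placemark>
            <name><xsl:value-of select="name"/></name>
            <description><xsl:value-of select="address"/></description>
            <!-- Point/coordinates would be filled in after geocoding -->
          </Placemark>
        </xsl:for-each>
      </Document>
    </kml>
  </xsl:template>
</xsl:stylesheet>
```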

Categories
User experience, web, technology

We need a “credit” attribute in XHTML

The XHTML 2.0 draft document by the W3C includes some promising attributes for elements. For instance, a navigation list could have a role with a value of sitemap. I.e.: <nl role="sitemap">

That’s cool. Think on that a bit, o ye of semantic persuasion. The potential benefit of this type of specificity in standard markup is great.

Now, that said, I was working on a site that I hope to launch tomorrow, and I would have loved to use an attribute like credit for image elements. It would be used to specify photo credits for a couple images I’m using, plus on some banners, I could have credited the designer who put them together.

It would look something like this: <img src="cool.jpg" alt="Illustration of a calico cat in a beret playing the saxophone." credit="J. Smith, Illustrator for Cool Colors, Inc." />

We could throw this information into the alt text, but it doesn’t really belong there, since the alt text is supposed to describe the contents of the image. We could also use the title attribute, but it would be nice to reserve that for slightly more pertinent information.

Today, I just added credit information in as comments in the markup. It was an adequate solution, I think, but will never be picked up by any user-agent.

Categories
User experience, web, technology

Diagram of XHTML, CSS, JavaScript as types of code in a web page

[Diagram] XHTML, CSS, Javascript: Cumulative aspects of a web page.

I’m thinking of using this diagram in an XHTML class I may be teaching in a couple weeks. The idea is to put XHTML, CSS, and Javascript in context with each other—yet to also illustrate that they are separate types of code and often are actually different files altogether.

Categories
User experience, web, technology

Why use CSS-based layouts?

I was prepping for a Dreamweaver class a couple months ago and was fiddling around with the practice files that came as part of the course material. There is a whole lesson devoted to using html tables as a tool to do page layouts. This poses an issue for me.

My primary job for the last six years or so has been to create web sites. It has been about three years since I did a web site that relied on tables for layout. Simply, there is no really compelling reason to continue to use tables for web page layouts anymore. Further, there are solid reasons to stop.

So, I reworked their basic practice file to use xhtml and css instead of html and tables. Here’s a comparison of the two versions.