Author Assurance: did I do this right?
I don't know any content team manager who wants their writers to work with frustrating, complicated tools. But improving the experience of using writing tools usually takes second place when departmental budgets are set: it loses out to investments with a clearer ROI. That's why, although I enjoyed many things about Rick Yagodich's book "Author Experience", I think the concept is hard to sell under that title. Every manager would like authors to have a pleasurable experience, but few can justify paying for it.
In fact, structured authors (and information workers in general) don't seem to expect a Google-like level of simplicity in their work tools. Comments on a recent survey suggest that older workers in particular are tolerant of software that isn't as easily usable as mass-market consumer apps. Structured authoring has inherent complexity and, as Don Norman explains, a tool must reflect the complexity of the task or risk frustrating skilled users. Seemingly complex interactions, such as Adobe FrameMaker's document-tree manipulation, actually speed up the structured authoring process and make it more accurate.
Every professional writer I know wants to do a good job, and using specialized tools is often part of that job. There's a problem, however, if the tools don't let writers know that they've done the job correctly. The job is, of course, to convey the intended meaning to the user, within the specified structure and conforming to the relevant technical requirements. In web authoring, and particularly in structured authoring, it's not always obvious to writers that they've fulfilled these requirements or, if there are errors, what those errors are and how to fix them. Too frequently, writers have to interrupt their work to ask a senior or technical colleague, “Will this work?” or “Did I do this right?”
Meaning is conveyed not only through words but also through visual formatting. Automated and online publishing should take care of the formatting for authors, but too often the tools abdicate that duty and expect authors to enter obscure codes or hierarchies of tags to achieve the required visual effect.
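To illustrate the alternative, here's a minimal sketch (the tag names and CSS classes are hypothetical, not any real tool's schema) of a pipeline that derives presentation from semantic markup, so the author writes only the meaning and never touches visual codes:

```python
# Sketch: map semantic tags to visual formatting inside the tool,
# instead of asking authors to enter presentational codes themselves.
# Tag names and HTML classes are illustrative assumptions.
import xml.etree.ElementTree as ET

STYLE_MAP = {
    "warning": '<div class="warning"><strong>Warning:</strong> {}</div>',
    "tip": '<div class="tip">{}</div>',
}

def render(chunk_xml: str) -> str:
    """Render a semantic chunk to HTML; formatting decisions live in the tool, not with the author."""
    root = ET.fromstring(chunk_xml)
    template = STYLE_MAP.get(root.tag, "<p>{}</p>")
    return template.format(root.text.strip())

print(render("<warning>Unplug the device first.</warning>"))
```

The author typed only `<warning>`; the bold label and styling are the tool's responsibility, and can change globally without touching the content.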
Online content is written with the understanding that users will have some sense of narrative flow when they read it. For sure, we can rarely assume that users will stay with us for the duration of a book; more likely a page, or a section of a page, is what they'll consume at any one time. But even within that section there must be some flow, and some sense of what might come next. While structured content enables clever re-stitching of content chunks into appropriately personalized narratives, authoring tools mostly do a poor job of showing writers how the chunks they're writing may relate to one another, and what those relationships imply for users.
Unhelpful error messages have received a good deal of attention and scorn in recent years. In authoring, however, errors in following templates or applying the right codes are more insidious: they aren't always immediately obvious. Sometimes it's only after content has been live for some time that a serious mistake is noticed.
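Much of this could be caught before publication. As a hedged sketch (the required elements and messages are hypothetical, not from any specific CMS), a tool could validate each chunk and tell the author exactly what is missing and how to fix it:

```python
# Sketch: pre-publish validation that returns author-facing fix
# instructions rather than cryptic codes. Element names are assumptions.
import xml.etree.ElementTree as ET

REQUIRED = {
    "title": "Add a <title> so the chunk can appear in navigation and search.",
    "summary": "Add a <summary>; it is shown when the chunk is reused elsewhere.",
}

def check_chunk(chunk_xml: str) -> list[str]:
    """Return an empty list if the chunk is valid, else plain-language problems with fixes."""
    root = ET.fromstring(chunk_xml)
    problems = []
    for element, fix in REQUIRED.items():
        if root.find(element) is None:
            problems.append(f"Missing <{element}>. {fix}")
    return problems

for issue in check_chunk("<topic><title>Reset procedure</title></topic>"):
    print(issue)
```

An empty result is itself valuable: it's the "yes, you did this right" that authors currently have to go and ask someone for.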
This author frustration has a serious impact on productivity, content accuracy, and ultimately the user experience. It costs our organizations, so investment to address it is worthwhile. As tool designers, implementers, or authoring-team managers, we have to do a better job of assuring authors when they're doing their jobs correctly, and of providing clear information on how to fix things when they're not. As Don Norman writes about software design in general:
To me, error analysis is the sweet spot for improvement. Usually, designers do think of the order in which activities will be done. But they seldom think properly about what should be done when the person encounters problems, or when the situation is novel…
We can do better. In the next couple of posts, I'll explore how.