I’ve applied for a speaking position for Relating Systems Thinking & Design 2013 at the Oslo School of Architecture and Design. My submission is toward the wonderfully punctuated theme of “teaching (systemic design or), system thinking in design (or design in system approaches).”
Here is my abstract:
The goal of this talk will be to communicate the approach and outcomes of a 16-week studio course, developed over the past two years, that teaches systems thinking to undergraduate interaction designers at the California College of the Arts (CCA) in San Francisco. The plan for the final presentation is to provide symposium attendees with a narrative describing the thinking behind the course design, learning objectives, and activities, as well as a report of my experience teaching and revising the course over the past two years. The value to attendees will be as a model and case study of one successful (as judged from student feedback) approach to relating systems thinking and design.
The course was developed as part of starting up an entirely new undergraduate program in Interaction Design at CCA. The vision with which I won the commission to originate the course framed the learning objectives as a growth process beginning with learning to see the systems around us, moving on to being able to model them conceptually, and closing with how to produce change in them, both in a general sense and within the context of interactive experiences.
My first step in translating this vision into a workable syllabus was an extensive search of the public web for similar syllabi. I found essentially zero publicly available references that addressed my proposed framework directly. This was when I realized I actually needed to design this design course!
I saw this innovation challenge mainly in terms of creating a new, dynamic narrative for existing formal systems thinking constructs that would make these ideas relevant to the digitally native sophomores (roughly 19-year-olds) taking the course. Therefore, I opened the course with a lecture in which I used a ‘ripped from the headlines’ method to demonstrate both how designing interactive experiences in today’s world requires an understanding of systems and how to use observations of events to see a system in terms of patterns, structure, and containing context.
After setting the stage for a formal presentation of systems thinking theory, I pause for us to read aloud a love story: Arcadia, a play by Tom Stoppard. This masterpiece packs a doctorate’s worth of systems thinking constructs (including time, structure, entropy, fractals, abduction vs. induction, and probability) into a 2.5-hour experience. Most importantly, it provides a mechanism for me to communicate to the students the critical role that emotion plays in human systems. This becomes the implicit frame for the rest of the course.
Next we turn to the fertile time just after World War II in which thinkers at Massachusetts Institute of Technology more or less simultaneously developed Information Theory, Cybernetics and System Dynamics. We begin the conversation with System Dynamics because it is the most intuitive and least mathematically intimidating of the three. We use Donella Meadows’s wonderfully accessible “Thinking in Systems” as a basis for studio discussion and activities that bring to life the idea of a system as made up of stocks and flows. From this base we expand the definition of a system to “a structure of relationships between elements that accomplishes a purpose and is regulated by feedback.” This provides a frame to talk about system archetypes and the importance of patterns as a tool for system analysis.
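To make the stock-and-flow vocabulary concrete, here is a minimal sketch (my own illustration, not course material; the tank, numbers, and names are hypothetical) of a goal-seeking system: a stock with a constant inflow whose outflow is regulated by a feedback loop comparing the stock to a goal.

```python
# A minimal stock-and-flow sketch (illustrative only, not from the course):
# a tank with a constant inflow whose outflow is regulated by negative
# feedback on the gap between the current stock and a goal level.

def simulate(goal=100.0, stock=0.0, inflow=10.0, gain=0.5, steps=30):
    """Step the system forward; feedback drives the stock toward the goal."""
    history = [stock]
    for _ in range(steps):
        # Outflow grows as the stock approaches and passes the goal.
        outflow = max(0.0, gain * (stock - goal) + inflow)
        stock += inflow - outflow
        history.append(stock)
    return history

levels = simulate()
# The stock climbs from 0 and settles at the goal of 100.
```

The point of the toy is the definition itself: the structure (stock, flows, feedback) accomplishes a purpose (hold the level at the goal) and is regulated by feedback.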
The general behavioral patterns introduced by archetypes provide a segue to the formal treatment of regulation and control offered by Cybernetics. The main goal of this part of the course is to provide students with a way to see certain kinds of man-made systems as machines with inputs, outputs, predictable behaviors, and discrete states. This ‘mechanical’ understanding of systems becomes a conceptual bridge between the initial ‘seeing systems’ material and the next phase of the course that I call “designing the digital machine.”
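That ‘mechanical’ view can be sketched with the classic coin-operated turnstile (my own illustration, not course material): a machine with two discrete states whose output is fully determined by its current state and its input.

```python
# A toy finite-state machine: the classic coin-operated turnstile.
# Each (state, input) pair maps to exactly one (next state, output),
# which is what makes the machine's behavior predictable.

TRANSITIONS = {
    ("locked", "coin"): ("unlocked", "unlock the arm"),
    ("locked", "push"): ("locked", "refuse entry"),
    ("unlocked", "push"): ("locked", "let one person through"),
    ("unlocked", "coin"): ("unlocked", "return the coin"),
}

def run(events, state="locked"):
    """Feed a sequence of inputs through the machine, collecting outputs."""
    outputs = []
    for event in events:
        state, output = TRANSITIONS[(state, event)]
        outputs.append(output)
    return state, outputs

final_state, outputs = run(["push", "coin", "push"])
# final_state == "locked"; outputs trace the machine's responses in order.
```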
In this phase we use the idea of a digital machine to create a bounding framework for the application of modeling as a tool for designing interactive experiences. I introduce five specific models (conceptual, persona, data, object, and interaction) and teach how they fit within the model-view-controller framework that describes software flow control, or the digital machine. We close out this phase of the course with a major project in which students develop models of a system with feedback and then use those models as the basis for a software interface.
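As a sketch of the model-view-controller loop the course builds on (a toy of my own devising, not the students’ project work), consider a counter application: the model holds state, the view renders it, and the controller routes user input back into the model, closing the feedback loop of the digital machine.

```python
# A compressed model-view-controller sketch (illustrative only).
# Model: holds the state. View: renders it. Controller: routes input.

class CounterModel:
    """The state of the digital machine."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

def render(model):
    """The view: a text rendering of the model's current state."""
    return f"count: {model.count}"

class Controller:
    """Routes user input into model changes, then re-renders the view."""
    def __init__(self, model):
        self.model = model

    def handle(self, user_input):
        if user_input == "click":
            self.model.increment()
        return render(self.model)

controller = Controller(CounterModel())
screens = [controller.handle("click") for _ in range(3)]
# screens == ["count: 1", "count: 2", "count: 3"]
```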
The narrative for the final phase of the course uses Rittel’s framework of the wicked problem as a launching pad for the students into an optimistic and aspirational future. The story we tell is that Rittel defined the wicked problem to explain the increasingly obvious failure of capital-D Design, as a top-down planning exercise, to solve social problems. This in turn led to a roughly 50-year period in which design played a minor, apolitical role of delivering delight at an individual level. However (the course narrative continues), for the interaction designers of today a new opportunity has arisen: to combine an understanding of systems with the time compression of high-bandwidth connectivity, device diversity, cloud services, and social networking to create bottom-up mitigations for social problems. Students are then asked to work in teams to design an information system focused on mitigating communication challenges within a particular wicked context: in the first year we looked at the distribution of organic produce; in the second, the fishery supply chain.
What is Digital Machine Theory?
It is a method I am developing to teach systems design to interaction design students.
I believe the theory demonstrates how to connect classical systems theory of the Forrester/Meadows school to the practice of interaction design.
The primary assertion is that interaction design may be understood as the design of a digital machine defined in terms of cybernetic principles.
I am not familiar with the terminology used to describe Digital Machine Theory. Do you have a plain language presentation?
Why yes, I do!
I have prepared a series of 6 short lectures that explain the theory in plain language:
- What is a system?
- From System to Software
- The Conceptual Model
- The Interaction Model
- The Object Model
- The Data Model
What is the value of Digital Machine Theory?
I propose the theory has value as:
- a theoretical framework to train individuals in the basics of software design practice
- a communication framework for planning the design or redesign of a software application
How did you come to develop it?
It is an outcome of my work as an adjunct professor in the Interaction Design Program at the California College of the Arts, where I teach a course called Systems to undergraduates.
Why are you calling it a beta release?
Because I am a nerd. Also because I feel like beta sets an appropriate (in my world, at least) expectation for the quality level of both the theory itself and the materials I have put together to explain it. I think it has some good bones, but the whole thing needs some banging on still. Do so and send me comments, timsheiner at gmail.com.
Aside from “which is better for wireframing, Illustrator or OmniGraffle?” the most common perennial question on UX design forums and listservs is some variant of “how do I convince management to invest in design?”
The reason this question comes up again and again is that the answer is not what anyone wants to hear: it can’t be done.
You simply cannot convince someone with words that design (or design research) is a valuable approach to solving problems.
The only way a person comes to believe in the value of design is to feel it for themselves. They must personally go through the transformative emotional experience of watching the human-centered design process do the magic it does. Nothing else works.
This would seem like an insurmountable problem: the only way to get sponsors to support design is to have them go through the experience of the very thing they won’t support.
OK, it is a tough problem, but it is actually not insurmountable. Here are the three approaches that have worked for me.
The Testimonial
A fact of human nature is that, except for psychopaths, no one can ignore someone else’s emotions when confronted with them. This means that one way to get a design skeptic to give it a try is to have someone else share the positive emotion of their own experience learning the value of design. Of course, getting the testimonial, and getting it in front of the skeptic in a format they can consume, are separate challenges you must solve.
The Pilot Project
The words “pilot project,” when wrapped around risk, give all stakeholders a way out should the thing go south. You still have to find executive sponsorship for your ‘pilot project,’ but this is more likely to happen when that executive can explain it upward and downward as a known and contained risk.
As An Alternative to Bupkis
In this approach, you must first identify something that is well and truly broken that design might fix. If you can build consensus that you’ve identified a real problem, and in particular one costing money and time, then your new approach of design, risky as it sounds, looks like a reasonable gamble in the face of no alternative plan for an ongoing and worsening situation. Of course, this one is something of a nuclear option: if you fail here (and if the problem is that hard, you very likely may), you do not get a second chance.
A very short post to share a simple formulation I have developed for explaining the relationship between the models I find most central in developing interactive systems.
First, a high level statement presenting the models and their relationship:
<business model><interaction model><system model>.
My meaning with respect to the “><” is that you can read this statement from left to right or right to left, or start in the middle. This is because all the models are connected through feedback loops.
Your choice about where to begin depends on what you know initially, what most interests you, or what direction you want to drive change.
Here is a presentation of the overall system of models as a tree, with more detail about their respective composition:
- Business Model
  - Value Proposition
  - Cost Model
  - Revenue Model
- Interaction Model
  - Conceptual Model
  - Object Model
  - Data Model
  - Error Model
- System Model
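The same tree can be written down as a nested data structure (just a restatement of the outline above, not an implementation of any of the models):

```python
# The system of models, restated as nested Python data. The System Model's
# own composition is not broken down in the outline, so it is left empty.
MODELS = {
    "Business Model": ["Value Proposition", "Cost Model", "Revenue Model"],
    "Interaction Model": [
        "Conceptual Model",
        "Object Model",
        "Data Model",
        "Error Model",
    ],
    "System Model": [],
}

top_level = list(MODELS)
```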
And there it is.
I am teaching my systems course at CCA for the second time this spring. Student feedback from the first time was really positive, and I learned a great deal that I’ve built into this updated version.
At the high level, I’ve broken down the course like this:
- Weeks 1-4: Classical System Theory
  - The Emotional Content
  - Classical Systems
- Weeks 5-9: Systems and Software Design: Conceptual, Object & Data Models
  - Conceptual Model
  - Object & Data Model
  - Error Model
  - Interaction Model: La Enchilada Completa
  - Application One: Final Presentation
- Weeks 10-15: The Nature of Wicked Problems & The Opportunities for Software Mitigations
  - The Web Interface Context
  - Wicked Problems
  - The Organization as Context
  - Software Mitigations
  - The Application as Narrative
  - Application Two: Final Presentation
For all the details, here is a link to the full syllabus:
If you are a homeowner in San Francisco, then you have had this experience:
You wander upstairs, through a door you’ve never seen before, and find an entire room in your house that you always knew had to be there but had never been able to find. You are thrilled at how this space will improve your quality of life. Then you wake up, realize you were dreaming, and remind yourself how charmed you are by the cozy Victorian in which you live.
Manifesting the Dream
My family of 5 shares 3 bedrooms, 2 baths and 1500 square feet. By world standards this is exorbitant and I’m not complaining, but my twin teenage boys were feeling pretty crowded sharing a 120-square-foot bedroom. As one of them put it, “we’ve never had any privacy, not even in the womb!”
This summer I decided to try to find my boys some privacy by turning that dream of found space into reality.
Here is how it came out.
First, you walk into the bedroom and see two John Malkovich-sized doors, one on the north wall and one on the south.
Open one to a glow of natural light.
Poke your head in to find a clean, cozy space with lights, outlets, a skylight, and enough room to stretch out comfortably.
Turn and shut the door behind you, and you are in a quiet, peaceful and very private space.
Step 0: The Space
Where did I find that space? Well, I took advantage of an odd detail of my house. As you can see called out in the photo below, the bedrooms upstairs do not occupy the same footprint as the lower floor, leaving triangular spaces on either side of the bedrooms.
The spaces on both sides had access already, in one case through a cupboard in the bathroom and in the other through a closet. This was actually pretty important because it permitted me to get in there and inspect the situation, convince myself the resulting finished space would be useful, and plan my approach, all without having to do any demolition first. Here is what the space looked like before the project began. Pretty clear why they call it ‘unfinished’ space.
From the inside these spaces were about 10 feet long, and a little more than 5 feet wide with the roof sloping up at a perfect 45 degree angle to join the wall at the same distance of just over 5 feet high. Cozy, sure, but finished out, this would be plenty of room for a twin mattress, a small bookcase and a few other personal items. I decided that if I committed to also putting a skylight in to both spaces, these would end up being functional and pleasant private spaces for the twins.
Step 1: The Rough Openings
The first step to the project was to cut the rough openings in the walls where the final doors would go. This was essential because, of course, eventually I’d need the doors, but more importantly it would let light and air into the spaces so I could work in there without suffocating.
Here’s a first view where I’ve already pulled the baseboard. I managed to get the baseboard out intact and was able to re-use it at the end of the project.
In this view, I’ve put down rosin paper to protect the floors. The trick here is to first lay down the removable blue tape, and then use duct tape on top of that to hold the paper down. That way, when the project is finished, you can just pull the blue tape up and the duct tape and paper come with it without hurting the finish of the floor. I’ve used this technique in the past with great success, but the lesson I learned on this project was to stick with the 3M-branded products. In this case I used off brands of blue and duct tape from Lowes, much to my regret. The duct tape didn’t stick very well, so I had to continually re-attach the rosin paper, and the blue tape stuck too well, marring the floor finish when I eventually removed it.
In this view I’ve framed the rough opening from the inside, but have not yet removed the lath and plaster. I had to cut through several studs and wrestle them out to create an opening wide enough for a doorway: lots of sweating and cursing.
Here you can see more clearly the studs I had to cut through resting on the header I made by doubling up 2×4′s. I thought about making the headers out of 4×6 stock but I did some rough load calculations and convinced myself that though perhaps a little lightweight, this approach would be strong enough to support the few feet of wall above the opening and the associated roof load.
The next step was to remove the lath and plaster in front of the newly framed openings. I laid out the edges of the openings with blue tape, both to serve as a guide to cut along and to help hold the plaster left behind in place. The best way to proceed is to take a utility knife and, beginning with light pressure, score the plaster repeatedly until you cut all the way through it to the lath underneath. You will go through plenty of blades in this process.
Once the plaster is cut all the way through, you strike the center with the flat of a hammer and watch in satisfaction as the material tumbles to the floor. Pull the remaining bits off the lath, bag it up, and get it out of the way. It is always a shock how heavy and horribly dusty this material is. Even in an open room with good ventilation, you will want to wear a cartridge dust mask for this phase of the demolition.
Once the plaster is removed, it is a simple matter to cut away the lath with a sawzall or jigsaw, using the rough framing to guide the blade. And there you are: access to the space is complete.
Step 2: Electrical
As it happened, in both cases there was already wiring running through the spaces, so the only electrical work I had to do was tie into the existing circuit and locate junction boxes for the outlets, light, and light switch that I needed. This work is not complicated, but it is definitely something you want to be shown how to do by someone who knows what they are doing.
On one side I had a Romex circuit to tie into, but on the other I had to junction into the old knob-and-tube wiring. Here is a picture showing some of the porcelain ‘knobs.’ The tubes are also porcelain and are used where the wire has to pass through framing. People often react to this kind of wiring as if it were more dangerous than the modern version using Romex, but I think this perception is incorrect. In fact, this older approach is probably more robust than the modern standard, just as the old 2×4′s shown in this photo, which are a) actually 2 inches by 4 inches and b) made of old-growth, vertical-grain Douglas fir, are more robust than the modern 2×4 that is actually 1.5 by 3.5 inches and made of soft, young wood. While Romex places the conductors directly next to each other, separated by a thin sheath of rubber insulation, in the older style the conductors are physically separated by several inches, often up to a foot, covered in two layers of a heavy woven insulation material, and secured to the framing with ceramic insulators. Really, the older approach turned out to be overkill, and modern codes provide for a less expensive, less material-intensive way to build.
Here the old and new are married together in a junction box.
While the final lighting was going to be halogen track, once I had the box in place to feed the lights and the light switch I was able to install a temporary work light.
Step 3: Subfloor
It turns out, unsurprisingly, that climbing around on joists is awkward and painful on feet, hands, and knees, so I was very excited once the electrical was done and I could install the subfloor. First I laid fiberglass batt insulation down between the joists. As the spaces were directly above insulated space below, the purpose of this insulation was more to muffle sound than to control temperature. I moved the blown paper-fiber insulation already in the space out of the way of the fiberglass by scooping it up and using it to fill uninsulated areas behind the building facade.
Here the subfloor is going in. An amusing anecdote is that the plywood for the subfloors was recycled from sets used at Armory Studios, a porn production facility.
Step 4: Skylights
One piece of the project that made me a bit nervous was the skylights. However, it turns out that except for having to cut a hole in a perfectly good roof, this is not such a challenging thing to do. First, you frame the opening.
Then you make that hole in the otherwise perfectly good roof.
Then you build what is called a ‘curb,’ which is really just another frame on the outside that mirrors the one on the inside.
At this point you need to cut back the shingles around the curb to make room for the new underlayment to seal against the roof sheathing. Here is where I brought in a roofing professional to finish the job by weaving the flashing and new shingles in around the curb to ensure the installation was watertight. Finally, the skylight itself is set onto the curb and screwed down, and the skylight installation is complete.
Step 5: Rigid Insulation
This step was conceptually simple, although a fair bit of work to do: install as much rigid insulation as I could fit between the rafters against the roof and the studs behind the facade siding. In most cases this was 4 inches of insulation (remember those old-fashioned 2×4′s that are actually 4 inches wide?), but in some places fitting 4 inches of insulation into a space that was nominally 4 inches proved too tight, and I settled for just 3 inches. I used blue foam that was 2 inches thick and foil-backed yellow foam that was 1 inch thick; having the two thicknesses to work with was helpful in creating the fits. This material was rather expensive, but I consider the decision to include it in the project one of the better ones I made. The insulation made a tremendous difference in the internal environment of the spaces, both in temperature stability and in noise intrusion.
Step 6: Debris Removal
At this point I’d generated about all the nasty debris I was going to create, so it was time to take it to the dump. Here’s a picture of the material staged in my garage on its way to being loaded into my van.
And here it is at the dump. Believe it or not, that is 1,000 pounds of debris. The old plaster and the roofing material are really, really heavy.
Step 7: Sheetrock
I had originally thought I’d do the sheetrock myself, but in the end I decided to hire it out. The thought of carrying all that rock upstairs, then cutting, recutting, taping, mudding, and sanding by myself took the wind out of my sails. Paying for this work was my largest single expense at $1,500, but I think it was money very well spent. Just avoiding all that horrible dust made it worthwhile. Plus, I knew those guys would do a better and faster job than I could possibly do myself, even though I only paid for a quality level two below their highest offering.
Step 8: Flooring
For the flooring, I used the least expensive snap-down product with real wood veneer available at Lowes. I was completely satisfied with the quality, appearance, and ease of installation. It really does just snap together, and as long as you have your cutoff saw nearby so you waste as little time as possible making your cuts, you can lay it down very quickly. Each of these 50-square-foot spaces probably took only about 3 hours to do. One trick worth passing on: when you have a piece that won’t lay flat, you want to tap it in rather than down. As it goes in, it pushes itself down.
The only challenging detail I had to deal with for the flooring was how to end it on the side where the roof met the floor. I was proud of the solution I came up with. I cut blocks with a 45-degree angle on top, at a height just right to be covered by the baseboard I planned to apply afterward. I cut a short piece of flooring that I locked in under each block, and then screwed the blocks through the sheetrock into the rafters. This made for a tight fit that held the flooring down but allowed it to expand and contract, and it also provided a nice nailing surface for the final baseboard trim.
Here is the appearance with the baseboard trim in place.
Step 9: The Doors
The doors into the spaces were a standard width (30 inches) but a custom height (48 inches), so I needed to make them myself. The material I used for the rails and stiles was something my local lumberyard, Beronio’s, calls ‘house reds.’ This is a pre-primed exterior trim product made by finger-jointing cedar scrap together. As such, it has a density and feel much like the redwood used in the original doors of my house. For the panel I used a 1/2″ plywood product with a name I can’t remember, but it has kraft paper glued to both sides so that it paints smooth and even.
I made a simple floating-panel door in which the dado for the panel also receives the tenon at each end of the stiles. Here are the machined parts for two doors sitting on my table saw.
Here is the glue up for one of the doors. I’ve learned from unfortunate past experience that it is critical to clamp the door flat at the same time you are clamping the frame together.
And here are the finished doors. It doesn’t show too well in this photo but I added an ogee molding around the edge of the panel on both sides for appearance.
Once the doors were done, I made the door frames and installed the doors in the frames with the hinges before bringing the assemblies upstairs to mount in the openings.
Step 10: Installing the Doors and Final Trim
Just before I began this work, a contractor friend of mine visited my job site and in an offhanded way said, “wow, you’ve still got a lot of work to do.” I thought he was mistaken as I’d already completely finished and insulated the rough space, added the electrical, gotten the sheetrock and floors in and made the doors and jambs. All that was left was installing the doors and the final trim, maybe two days, max, right?
Wrong! All together this final finish work took me more than a week. First thing, when I got back upstairs I realized I’d forgotten to think clearly about how the door thresholds would work, and so I ended up having to remove a fair bit of the old sill plate between the original bedroom and the new spaces.
Then came hanging the doors. This is where I learned the lesson that rough openings should, if anything, err on the side of being too large. Or actually, a rough opening can’t be too large, it can only be too small. And if it is too small, you won’t get your door hung true. Suffice it to say that after considerable cursing, re-adjusting framing, and adding and removing pieces of sheetrock, the doors are in true enough.
Here’s the setup I was working with in terms of having my chop saw right there for cutting the trim pieces. This close proximity between the work and the tool saves a ton of time.
After the door frame was in, I set the thresholds down. My technique was to use a Forstner bit to start the hole, then drill it through, set the screw, and plug the hole with a plug made with a plug cutter. The tail part of the threshold is actually a separate piece I made that is located to the floor and moves in and out just a bit as the floor expands and contracts with temperature change.
Here is the final trim from the inside. I used simple casing material for the interior side and manufactured bullnose trim to cover the old sill plate.
On the bedroom side, I used casing I’d salvaged from an earlier remodel to recreate the same trim style that surrounds all the doors in my house. The casing is a really nice symmetrical style that, for some reason, is no longer available as a standard profile, even from the San Francisco Victorian trim specialty house.
Step 11: Paint and Final Hardware
I never planned to paint the project myself, and so it felt like I’d reached the end of my project when it was finally time to call in my painter. He worked with my twins to pick some very appropriate colors. For the sleeping slot on the north side, with less direct sun, the boys picked a yellow and for the south slot, that gets sun most of the day, they chose a nice blue.
Here’s an example of the beautiful reproduction hardware I purchased from Rejuvenation.
Here’s the final out of pocket expense for the project.
This price tag does not include anything for my labor. The project took about 3 months, and for much of this I was working on it 2 days a week, so let’s say I spent roughly 30 man-days on it. Conservatively, that’s probably about $15,000 worth of labor for a total project cost of about $23,000 or about $230/square foot.
The truth is that my twins are extremely pleased with their new spaces, and overall it has been a really good thing for family harmony. But at that price, you can understand why, for most San Franciscans, finding that unused space in their house remains just a dream.
(special thanks to my buddies Tom Ehline, Adrian Burns and David Zapata without whose help things would have taken longer and been a lot less fun)
UPDATE – 10/21/12
One day after including an @zipcar in a tweet about this post, I received a call from Veronica at my local Zipcar office.
She explained that, being in my local office, she had a bit more control over my account than the person I’d spoken to previously at the call center.
She offered to re-instate my account to honor the non-refundable year subscription I’d paid for, and also to set it to automatically expire, rather than roll over, on its anniversary.
I accepted her offer. The romantic ride up the coast in a convertible may yet happen.
I just recently quit my subscription with Zipcar.
This was not because of a problem. Prior to ending my relationship with them, I had no issues with their product. In fact, I was kind of impressed: the sign-up process was pretty slick and efficient, and the one time I used a car I found the experience simple and satisfactory.
But then my situation changed. For various reasons, totally unrelated to the Zipcar experience, my wife and I decided it made sense for us to buy a second car. With this car, I was no longer going to need a Zipcar. And even though I’d committed to a year subscription only a few months before, I decided that, while I was thinking about it, I’d cancel my subscription so that I didn’t forget about it and have it automatically renew.
I assumed that I could cancel online.
I had to call a number and deal with a person whose job it was to convince me not to quit. She was reasonable, and polite, and got efficient when I made it clear my decision was made. But still, this didn’t feel very good.
I had assumed that even though I was canceling, as I’d paid for a year’s membership, I’d stay a member until the term ended. I had visions of splurging on a sexy convertible for a drive up the coast with the wife or something.
My ability to rent a car was terminated as soon as I cancelled the subscription, even though I’d paid for a year already, and no refund was coming to me. That didn’t feel good either.
I had assumed that as soon as I quit my subscription, I’d stop getting the annoying promotional emails from Zipcar that I’d never opted into in the first place. That assumption, too, proved wrong.
None of these behaviors are entirely unreasonable, nor is any one of them particularly egregious.
However, they do tell a sad story.
The narrative arc of this story is how a person (me) with a really positive image of a brand he thought was cool and innovative, but from which he no longer needed service, is taught, as the door hits him on the way out, that said company has a sales-driven corporate culture that understands customers only as individuals from whom it is currently making money.
My feelings were hurt, but I’ll get over it.
However, if I ever decide that I, or perhaps my soon to be driving teenage sons, need a car sharing subscription, I’ll look somewhere other than Zipcar.
I took BART from 24th and Mission in San Francisco to 19th and Broadway in Oakland and then walked down to the Kaiser Auditorium on Lakeshore Drive. As I emerged to the street and took in the striking serpentine-green I. Magnin building I noted that feeling I always get when I go to Oakland, namely, that what I don’t know about Oakland could sink a battleship. It was 8:15 am on a Saturday morning, rainy, gray and very quiet.
The Kaiser Auditorium building impresses from the outside with its scale. It has a grand modernist persona, distinctly different from the art deco environment I’d just walked through. A small paper sign on the door invited me to walk around back for Code for Oakland.
On the inside the modernist theme continued with floating escalators that brought me up to the mezzanine level, past a black and white photo homage to Henry J. Kaiser that felt like looking at a ’60s-era LIFE magazine. I was warmly greeted at the Code for Oakland registration desk, grabbed a muffin and some coffee, and sat down to wait for I wasn’t sure what.
The lobby was buzzing with the hum of 100 people or so when, at 9am, we were asked to sit down in the auditorium. Susan Mernit gave a nice welcome and clear overview of the day, then we embarked on what I thought was a brave endeavor: a brief self introduction by everyone in the audience!
My concerns were completely misplaced; the experience was fascinating. The crowd skewed white/male/geek, but far, far less so than any tech event I have ever attended, with lots of women and people of all colors. The professional breakdown seemed to fall along a non-profit-to-tech continuum with a few student and government outliers. As I listened to the different voices explaining their respective reasons for volunteering their Saturday, I wondered which factor was the bigger diversity driver: #gov20 or Oakland?
After introductions we were asked to self select into groups to work on the project ideas that had been collected in earlier community meetings, and during the self-introductions. I was pleased to find some interest in the idea I had suggested to create a simple SMS messaging system for community organizers. After some milling about the groups moved into the lobby and settled down around the tables set up there.
Our team consisted of myself, Ryan Jarvinen (@ryanjarvinen), Lamont Nelson (@thelamontnelson), Alan Palazzolo (@zzolo), and Keith Tivon Gregory (@tivon). Except for the extreme gender bias, we were a reasonable mirror of the diversity in the larger group; as the lone Gen X-er (and just barely squeaking into that cohort!) I significantly raised the average age of our otherwise solidly Gen Y crew. Our overlapping skill sets easily covered the experience design and the front- and back-end coding we would need for our project.
As we began brainstorming about the product, which we named ‘ComTxt’ (COMmunity TeXTing), two key ideas emerged. First, a crucial success factor for community organization is the ability to provide the community with notifications (of upcoming meetings, actions, notable successes, etc.). Second, the one technology constant among groups of diverse racial and/or socio-economic status (as might be found in an urban neighborhood or as the parents in a public school) is text messaging. Certainly email is common, but it is by no means ubiquitous, and while phone trees can work, they are inefficient and manually intensive. Essentially everyone, however, has an SMS-capable mobile phone these days. Therefore, a system that enabled an organizer to broadcast text messages to a community of subscribers would be a useful tool for advancing community process.
We transformed these observations into a simple vision: a mailing-list for text messages. With this “good-enough” consensus on the basic product vision, we created some sketchy personas (PTA president, teacher, neighborhood activist) and very simple use cases for each. The PTA president use case was representative: instead of having to collect email addresses at each PTA meeting (and then manually transcribe them), with ComTxt she would be able to display a simple poster that told anyone wishing to receive updates from the PTA to text the name of the school to a particular number.
After a brief break to consume the satisfying and complimentary lunch of sandwiches, chips and cookies, we began to identify the technologies and frameworks we’d use to build our proof of concept. At this point we had about 4 hours left before the team presentations would begin at 4:30pm. I was silently dubious that we could go from zero code to a working prototype in that amount of time. My partners were relatively unconcerned because they understood something I had not yet internalized: we weren’t starting with zero code.
In fact our starting point was the remarkable world of frameworks, APIs and services that is the current web development environment. The programming reality of today is that there is so much pre-existing functionality, documentation and example code available via a web browser that no online project ever starts from zero. My team’s ensuing discussion of exactly which API or service to use was largely over my head, and though I am certain real nuances were being discussed, I think the guys would admit the debate wasn’t much different from arguing over the best taqueria in San Francisco. In the end I believe we ended up creating a node.js solution that connects with the Twilio platform, but I may have missed an abbreviation or two.
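To give a feel for how small the core of such a service is, here is a sketch of the kind of command handling a text-message mailing list needs. All names here are invented for illustration, not ComTxt’s actual code; a real deployment would receive message bodies via an SMS gateway webhook (e.g. Twilio) and send broadcasts back out through its REST API.

```javascript
// Hypothetical sketch of an SMS mailing list's core logic.
// Parse the body of an incoming text message into a command.
function parseCommand(body) {
  const text = body.trim();

  // Standard SMS convention: STOP unsubscribes the sender.
  if (/^stop$/i.test(text)) return { type: 'unsubscribe' };

  // An organizer broadcasts by prefixing the message with SEND.
  const broadcast = text.match(/^send\s+(.+)$/i);
  if (broadcast) return { type: 'broadcast', message: broadcast[1] };

  // Anything else is treated as the name of a list to subscribe to,
  // e.g. a parent texting the school name from a PTA poster.
  return { type: 'subscribe', list: text.toLowerCase() };
}

console.log(parseCommand('Lincoln Elementary'));
console.log(parseCommand('SEND Meeting moved to 7pm'));
```

The gateway wiring is where a platform like Twilio earns its keep: the service never touches carrier networks directly, it just answers webhooks and calls a messaging API.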
By this point in the process we had made a seamless and almost unspoken transition from group process to each of us executing as individuals on the tasks to which we were respectively best suited. Ryan gathered and returned to us information about the pros and cons of various text messaging platforms he’d learned from other groups, and he set up the Github repository we’d be using. Alan and Lamont dived into the details of our chosen APIs and began writing the code to implement the simple set of commands ComTxt would understand. Meanwhile, Keith and I collaborated on a prototype web UI for accessing the service, complete with some initial ideas about branding including a logo.
And suddenly it was 4:15 and our prototype didn’t work! Undeterred, we quickly agreed on how to replace our planned demo with a PowerPoint show which I was still finishing even as I moved from our work table into the auditorium to watch the other teams present.
Watching the other presentations I began to feel like I was seeing the beginning of something real and transformative, even if I still couldn’t exactly articulate what it was. Gov 2.0 may not yet be truly changing lives, but I am certain it will and soon. Last Saturday I saw one good, and useful, idea after another presented and in most cases actually working with a degree of polish and functionality that was remarkable given the few hours the teams had had to create them. I got an emotional thrill from the energy and enthusiasm for positive change that had coalesced into these fascinating projects.
Literally moments before our turn to present, Alan, grin on his face, leaned over and told me the system was working! It turns out that while I had been watching the other presentations spellbound, he, Lamont and Ryan had continued coding and had solved the blocking issue. I rapidly added a telephone number to our title slide and we all went up to present. The audience understood our simple idea right away and shared our smiles when, after texting a subscription request to ComTxt, our phones buzzed in unison with a message from Alan:
Code for Oakland is great!
Sent from ComTxt
The paradox of web accessibility is that learning how to achieve it is not very accessible!
The problem is figuring out where to start. While there are a number of obviously relevant standards and examples available online, it was hard for me, as an accessibility novice, to sort through these guidelines to help our development team construct a set of concrete tasks that would return the greatest accessibility improvement for the least effort.
As it turned out, the thing I needed in order to understand how to prioritize our efforts was to spend a day and a half sharing our upcoming release of JasperServer with a customer and that customer’s accessibility consultant. The results of this experience were both humbling and encouraging. The humbling part was the discovery that in its current state our brand-new interface framework was not very accessible. The encouraging part was that with just a few hours of work, once I knew what to do, I was able to use the systematic nature of our new system to make significant accessibility improvements.
The key to all this, of course, was the opportunity to work with experts in a real-world setting, and to be able to make changes and test them in real time. While there will be no substitute for this experience, I’ve distilled my learnings into the following list, which I hope could be helpful to any web designer trying to understand how to begin improving the accessibility of their application.
Comply with Keyboard Standards
Users of screen reader software do not use a mouse, for two reasons. First, they drive the screen reader through keyboard commands, so leaving the keyboard is awkward. Second, the rapid movement of the mouse tends to overwhelm the screen reader, which cannot keep up with the rapid changes in input focus. While JAWS (a popular commercial screen reader) does have a ‘virtual’ mouse that permits a user to simulate a mouse click via the keyboard when nothing else will work, this cannot be relied upon because it is not part of any general standard. As a result, in order to be accessible, all required user events must have keyboard equivalents. In addition, these keyboard equivalents should meet standard expectations (e.g., the return key follows a hyperlink) to be most useful and intuitive.
The key point here is that by using the screen reader experience as the baseline design context, we will also achieve accessibility for the larger community of users who are sighted but must, or prefer to, use a keyboard and not a mouse.
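As an illustrative sketch (not JasperServer’s actual code), the heart of a keyboard equivalent for a custom control is a mapping from key presses to the same actions a mouse click would trigger, following the standard conventions:

```javascript
// Sketch: map key presses onto the actions keyboard users expect.
// Per common convention, Enter activates links and buttons,
// Space activates buttons only, and Escape dismisses popups.
function keyboardAction(key, role) {
  if (key === 'Enter') return 'activate';
  if (key === ' ' && role === 'button') return 'activate';
  if (key === 'Escape') return 'dismiss';
  return null; // no keyboard equivalent for this key
}

// In the browser this would back a keydown listener, e.g.:
// element.addEventListener('keydown', (e) => {
//   if (keyboardAction(e.key, element.getAttribute('role')) === 'activate') {
//     e.preventDefault(); // stop Space from scrolling the page
//     element.click();    // fire the same handler a mouse would
//   }
// });
```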
Add ARIA Markup
The ARIA standard (Accessible Rich Internet Applications) is being widely adopted by the accessibility community. We tested ARIA markup with two screen readers, JAWS and NVDA (an open source screen reader), and found that both were well along in supporting the ARIA standard, providing appropriate context-specific instructions when encountering ARIA markup.
In general, adding ARIA markup is very low risk, as it takes the form of element attributes that have no direct effect on rendering or behavior. Some of the attributes—particularly those targeted at improving orientation (ARIA ‘landmarks’)—improve accessibility instantly. Other attributes, such as those associated with what ARIA terms ‘widgets’, can’t be added until supporting interactive scripting is also in place, because these attributes cause the screen reader to announce behaviors that must be backed by custom scripting.
Adding heading tags to the markup was a simple and effective method for improving a screen reader user’s ability to navigate pages. We also learned that it was not a problem for the headings and the ARIA landmarks to provide essentially redundant information. Screen reader users have the ability to navigate by heading or by landmark, often switch between the approaches depending upon what appears to be working best and don’t have a problem sorting out any redundancy.
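A minimal illustration of these two navigation aids together (the page content here is invented) might look like:

```html
<!-- Illustrative markup only: ARIA landmark roles plus ordinary
     heading tags give screen reader users two redundant, equally
     valid ways to navigate the page. -->
<div role="banner">
  <h1>Sales Reports</h1>
</div>
<div role="navigation" aria-label="Report list">
  <h2>Available Reports</h2>
  <!-- links to individual reports -->
</div>
<div role="main">
  <h2>Quarterly Summary</h2>
  <!-- report content -->
</div>
```

The landmark roles are plain attributes, so sighted users see no difference; screen reader users can jump by landmark or by heading, whichever works better for them.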
Provide Non-Visual Feedback
It is common now in web applications for a user event to trigger an update to only part of a page. While this change is generally obvious to a sighted user, it is completely hidden from a blind user. There are ARIA standards for dealing with this exact issue by triggering alerts that will be spoken by screen readers. These attributes must be added to any scripting that dynamically updates pages, or generates error messages and alerts.
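For example (illustrative markup, not JasperServer’s actual implementation), an ARIA live region lets the screen reader announce a partial-page update without the user moving focus:

```html
<!-- A live region: screen readers speak changes to its content. -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // After a dynamic update completes, write the outcome into the
  // live region so non-sighted users get the same feedback a
  // sighted user gets from seeing the page change.
  document.getElementById('status').textContent =
    'Filter applied: 12 rows shown.';
</script>
```

The `polite` setting queues the announcement behind whatever the screen reader is currently speaking; `assertive` (or `role="alert"`) interrupts, which is appropriate for errors.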
Web applications can be written to assign HREFs to anchor tags dynamically. Unfortunately, anchor tags without HREF attributes are not recognized by screen readers. This limitation can be addressed by adding empty or dummy HREF attributes to anchor tags but the implementation must be tested in all target browsers as there is inconsistency in how browsers treat the placeholder HREF attributes.
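A sketch of the workaround (the handler name here is invented):

```html
<!-- Without an href this anchor is invisible to screen readers and
     unreachable via Tab. The placeholder '#' makes it focusable and
     announceable; returning false from the handler keeps the page
     from jumping to the top. Verify in every target browser, as
     treatment of placeholder hrefs varies. -->
<a href="#" onclick="openReport(); return false;">Open report</a>
```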
Develop Internal Best Practices for Accessibility
One cannot create an accessible application overnight. It will happen over time, as long as an organization has a development culture in which accessibility is given priority. This can be helped along with simple tactical steps, such as an ‘Accessibility Checklist’ for developers, and more strategic ones, such as requiring that QA personnel, designers and developers build up a comfort level with using screen readers to test prototypes and production code. The best way to make this happen alongside other priorities is to establish that accessibility is neither an afterthought nor a special case, but part of creating semantically sound markup that benefits all users.
Work with an Accessibility Consultant
To achieve more than perfunctory accessibility compliance, it is crucial to develop an ongoing relationship with an accessibility consultant. There are several reasons for this. First, building a culture where accessibility is a core value requires that development personnel meet and observe individuals who rely on assistive technologies. Second, while QA tests can be created to validate standards compliance, observing real disabled users is the only way to know whether an application has achieved real-world accessibility. Third, as standards and assistive technologies are still in a significant state of flux, any organization, but particularly one whose understanding of how to implement accessibility is immature, will benefit from the advice and guidance of an expert source.
The reality of accessibility is that it is no different from usability or simplicity or any other system characteristic: it can only be achieved by making it an ongoing and central priority.
While this might sound as if accessibility will then compete with other priorities, in fact improving accessibility helps to advance the quality of the user experiences for all client applications. In essence, accessibility is about delivering markup to assistive technologies that is appropriate for those technologies. Seen in this light, there is little difference between designing for accessibility and designing for mobile or any other experience. In all cases what needs to happen is that the application server delivers to each interface client code appropriate to its particular capabilities. As there is no doubt that support for a diversity of devices is the future for all software applications, all that needs to be done to improve accessibility compliance is to always consider assistive technologies in the mix of supported devices.