Collaboration with development is a handshake, not a handoff

We recently spent 14 months designing an investment platform for traders and portfolio managers. As you might imagine, this was a large and complex application that required a tremendous amount of collaboration with the client. Our team consisted of a design communicator, an interaction designer, two visual designers, and an engagement lead. We spent many hours with subject matter experts (SMEs), business analysts (BAs), and developers, crafting a solution that satisfied the needs and goals of seven user personas (and because their real-world counterparts were also employees of our client, actual users reviewed our work every step of the way). This article describes some of the key techniques we employed to ensure that the interaction design was something our client's development team could implement. I'll focus on our collaboration around the interaction design; visual design was a key component of this highly visual interface and deserves its own article.

Start with a good plan

An important aspect of the engagement was the creation of the original project schedule. The first phase was slated for nine months. To deliver such a large volume of design effectively, we tackled each new topic in three-week iterations. This decision played a critical role in the success of the project because it made the design process more manageable and continually put the latest design in front of users. Our initial iterations covered the largest interface interactions for the seven personas. To borrow a phrase from Stephen Covey, we "put the big rocks in first," and then proceeded to work on smaller, more constrained problems. Each new iteration further tested our previous assumptions and design decisions. This kind of stress test ensured that our design framework was indeed solid.

Designing in three-week iterations

Three weeks proved to be an effective time frame for a focused design effort. Everyone involved could easily wrap their heads (and schedules) around that much design while staying focused on the problems at hand. We also found that you can't get too far off course in three weeks, and you have enough time to solicit feedback and refine design topics. Because we repeated this cycle nine times with two additional wrap-up iterations, everyone became familiar with the rhythm of the schedule and could plan their lives accordingly.

We found the following schedule worked extremely well for each three-week iteration:


Days 1-5: Kickoff and first design pass. We began the first week with a kickoff meeting, where we presented the upcoming topics to the product team and talked with subject matter experts, business analysts, and key developers. We'd spend about 90 minutes discussing what we wanted to accomplish and making sure we had the resources necessary to solve the design problems. The meetings usually ended with a discussion with the SMEs and a plan to meet with them again over the next day or two.

We spent the other days in our own design meetings back at the design studio. Each meeting ran anywhere from four to six hours, after which we would document the design and draw the screen images. Our schedule was also punctuated with SME meetings and less formal calls to business analysts when we needed deeper domain knowledge.

Days 6-10: First check-in and iteration. The first design check-in with the product team occurred early in the week. We presented our design ideas, some rough and some polished, that served as our proof of concept for the design topic. We'd take the entire client team through one or two scenarios that exercised the current topics and perhaps discuss a few ancillary anatomy screens to provide context. This meeting gave us an opportunity to dig deeper and validate our assumptions. It also provided a forum for questions and discussion with the group at large, which further informed the next chunk of design.

Days 11-14: Second check-in and final iteration. At the second check-in, we presented a much more polished design. We would quickly run through any new materials and then march through the scenarios. More developers were invited to this meeting so they could ask technical questions and hear feedback directly from the users. Since we had already vetted the design from a user- and goal-directed standpoint, this was where the developers would really weigh in from a technical and architectural perspective. They provided feedback and insight into the challenges they would face when building the design and offered suggestions for how we could help them be more efficient (we eventually moved to daily meetings with the developers; more on this later).

Day 15: Final delivery. The final deliverable for the iteration included an updated draft of the design document as a PDF, and any new image files. We published these directly to the development team and stored the files in their source-control system for future reference. As the document grew larger, it became increasingly important to identify which sections had been updated so the developers knew where to look for changes.

By rapidly iterating all aspects of the design documentation and delivering new drafts regularly, we made it possible for the developers to respond to the changing design and evaluate it continuously. This consistent visibility helped us gain their trust (we weren't "hiding" the design), and ultimately made for a stronger design because of the constant stress tests the developers put it through. Throughout this collaboration, the questions revolved equally around the implementation of the visual design and the interaction design. Our team-based approach allowed us to focus on isolated questions individually or to meet and respond as a team when appropriate.

Schedule daily check-ins

As the project progressed from research to design framework to refinement-level design, we became acutely aware of how important it was to collaborate even more closely with the GUI development team, beyond the larger-team check-ins we conducted during the three-week design iterations. As we began working with the developers more frequently, we discovered a powerful synergy with many beneficial effects. After consuming several drafts of our design documentation, the developers started coding and had many questions because the design was still evolving. Midway through the project, we started conducting daily check-ins to accommodate their questions. These scheduled daily "office hours" made our collaboration much smoother and helped us solve design problems more quickly.

For our daily check-ins, we typically met with the same two developers and roped in other engineers when we needed their expertise or when they had questions of their own. This open access to the engineering staff was a cornerstone of the successful collaboration; we were always able to get the answers we needed, usually within 24 hours. Instead of distracting us with a string of emails throughout the day, each developer saved their questions and problems for office hours. This allowed us to keep momentum during our own design sessions instead of having to duck out to answer one-off questions.

The check-ins worked as a two-way exchange. The engineers showed off their most recent efforts (which consistently delighted us), while we answered their questions and briefed them on our latest design decisions. We got very good at fielding a barrage of small questions or wrestling a big one to the ground. On a good day, we already had the answers and could point the developers to the appropriate section of the design document. At times, we simply had not yet published the documented answer, and it was forthcoming in the next document delivery. On a not-so-good day, we were left scratching our heads, and our only response was, "We'll get back to you on that." It was far better to get this kind of feedback early in the design cycle than later.

Debrief after check-ins

After each check-in, the design team held a quick debrief to compare notes and make sure we had captured what we needed from the conversations with the developers. Because the engineers' questions might pertain to any part of the interface, depending on what they were working on, we needed to balance their concerns against the topics we were tackling in the current iteration.

Follow up on open issues

Because we were collaborating almost every day, we had to constantly revise our list of open issues. When the developers had pressing open questions, we made it a priority to get them answers as quickly as possible (usually by the next day). We often sent design documentation updates via email while we revised the full document that would be delivered at the end of the iteration. This "closing the circle" earned us a great deal of credibility with the engineers, who learned to trust our solutions and value our responsiveness.

When we began producing graphic assets, our visual designers constantly updated the asset library and delivered it to the development team. It was thrilling to see the nascent design materializing in code, usually at a pixel-perfect level. When there was a problem with an image, our visual designers were right there to make changes and deliver updated production assets. This early collaboration also let us agree on a graphic asset naming convention that worked for the developers, which made it possible to turn around changes to individual elements quickly and drop them straight into the developers' source repository.
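As a purely hypothetical illustration (not our client's actual scheme), a convention of that kind might encode the component, state, and size directly in each filename:

    btn_tradeticket_submit_default.png
    btn_tradeticket_submit_hover.png
    icon_portfolio_alert_16.png

With a scheme like this, an updated element keeps its filename, so the designer can overwrite the file and check it into the repository without anyone touching code or guessing which asset goes where.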

As designers, we have the opportunity to provide an immense amount of value as the design moves through the development process. This process works best when it's less of a handoff and more of a handshake: a commitment between the designers and the developers. Trust is a key component of this relationship. Once the developers learned to trust our design decisions, and realized that we were really listening to their feedback about technical feasibility, they were able to focus on writing code rather than second-guessing our design choices.

Likewise, our design team learned to trust the developers' feedback about the design's impact on technical and resource limitations; it was clear that they wanted to build the best solution, too. Our responsiveness to their feedback taught them that we "had their backs," both in providing a design that accounted for technical constraints and in truly understanding the needs and goals of the users. This was reinforced in our larger meetings with the analysts and subject matter experts, who approved our design decisions in the context of business goals and their understanding of the domain. The end result is a product that looks, feels, and behaves as designed and is on track to satisfy user, business, and technical needs.