Experience Report

Partnering to Improve Usability

About this Publication

After many years of working as a product owner, I embarked on my first project partnering with a user experience team. Throughout this project I've begun to understand the value of user testing during development and the extent of the bias in my own design work.

1. Introduction

Our software team at IHS Markit set out to improve the usability of our engineering applications with the assistance of an in-house interaction design group. Initially, we were skeptical of their lengthy design cycles and worried they would clash with our iterative development process. We experimented with developing our own fast prototypes while waiting on the design team, and we became convinced we could proceed without their expertise.

Ultimately, we learned a humbling lesson: as subject matter experts, we have an instinctive bias that we were injecting into our designs and that was influencing our user test sessions. We came to see the value of the design team's expertise and that, by partnering with others to design our software, we can achieve better UX outcomes.

2. Background

Fekete started as an engineering consulting group that eventually expanded to design commercial software. I started in 2007 as a Petroleum Engineer and worked in the consulting group for 3 years. Throughout this time, I learned a number of our engineering applications used for reservoir engineering and reserves evaluation. I then transitioned into engineering support for our applications and became more involved with the software development group. After 2 more years in this capacity and a maternity leave, I moved completely into the development team as a product owner. As a result, I've learned software development largely from the user requirements perspective, without any formal training. In our group of 5 developer teams, most of the product owners started in their role the same way.

When planning our engineering applications, the product owners (mostly other engineers) never thought about how easy or intuitive our software was. We were only thinking about whether or not the software solved the underlying engineering problems that users would face. The differentiator in our software was its ability to solve advanced engineering problems. Our single competitor built more complex numerical simulators, and we were the only company building middle-of-the-road engineering solutions that negated the need for intensive calculations. We looked at it from this perspective: if you have the expertise to work with these complex engineering methods, then you're smart enough to figure out how to use the application. Usability wasn't our primary concern. Our aim was to meet the functional needs of engineers.

3. Starting in User Experience

3.1 Creation of Harmony Product

Within my first year in the consulting group, the Harmony application was conceived to expand beyond our niche products to a larger market. This product targeted a different user group than the existing Fekete suite. Its focus was on simpler "everyday" analysis methods, centered around a heavily used curve-fitting method for production forecasting (decline curve analysis). We would be breaking into a market with many well-established competitors. Simply solving the engineering problem was no longer a sufficient differentiator. We needed a compelling reason for users to transition from long-standing products they knew well.

What we thought about as usability was how well the different analysis methods interacted and connected. We wanted to move from checkboxes on a list to a fluid process that matched the workflow of the engineer. We came up with clever ideas that solved problems we saw with existing engineering applications. There was an intense focus on training and demonstration in order to sell Harmony as "easy to use." While successful, it was treated more as an add-on than a replacement in our users' common processes.

3.2  New Focus on Usability

In 2013, Harmony had become an established product, primarily within the existing user base of our older applications. While Fekete was working on redeveloping Harmony into a multi-user, enterprise-grade application, the company was acquired by IHS Inc. This introduced a new head of product management to our team. Beyond the concerns of developing a multi-user application, usability became a new and higher priority. While we often solicited user feedback, our focus was on adding features, and it was uncommon to revise existing functionality when users struggled. The new leadership pushed us towards an "Apple mindset" focused on intuitive applications that would require minimal training to get started.

Initially there was some resistance – we had generally received positive impressions from users and still held that much of the existing application was good enough. We believed we'd created something powerful and users simply needed to learn the application's full capabilities.

At this point we hired an interaction designer within the Harmony team. We were under the impression that she would come up with low-cost usability improvements that could be worked in incrementally; she was told to completely rethink the application. This difference in expectations didn't get the relationship off to a good start. The result was very little change in interaction style – the redesign was too costly and often there wasn't a cheap alternative, so we continued along our usual process. Eventually the designer left the company.

3.3 The Common Component Initiative

When another new leader for product development came in, he had a different approach to improving usability for the products in his portfolio. IHS had grown through many acquisitions, and there was a significant disparity in look and feel across the acquired products.

The goal of this strategy was to better cross-sell products. If the interactions were common amongst IHS products and applications connected smoothly, users would be compelled to remain within our suite for most or all of their workflow. Anyone using one IHS application would have a head start at learning other products, making them appear even easier to use.

The first part of this initiative was decline curve analysis – a very common petroleum engineering tool. Different versions of this analysis method existed in seven applications, with others in the portfolio wanting to develop their own. Across these applications the analysis was intended for a wide range of uses, from automatically applying an analysis to a large number of wells to spending longer periods manually adjusting an analysis well by well. Decline curve analysis was a core component of the Harmony application, but in many of the others it was a convenient add-on that received little ongoing development. In those applications, decline enhancements and usability improvements remained a low priority that was unlikely to be addressed.

It was decided that, due to our existing experience, the Harmony team would build a decline component to be integrated initially by three geoscience applications in the portfolio.

3.4 The UX Team

At the start of 2016, to kick off the component initiative, we were introduced to the UX team at IHS – specifically the group serving the Energy portfolio. The team is composed of graphic designers, interaction designers, and a usability researcher. Its purpose at the company is to provide expertise in user interaction for all development teams within the Energy portfolio.

In selling their successes, we were told the story of another IHS development team that had worked with the UX team. During prototyping, the designer, product owner, and developer would each propose a solution, and nine times out of ten the users would pick the option created by the designer. It was clear that the UX team was going to be a valuable resource for us. At this point we had a lot of confidence in their ability to build intuitive tools that could be picked up out of the box and used without any need to teach the users.

Following the introduction to this team, we began working with their usability researcher to conduct interviews with external clients of Harmony (4 or 5 people) to collect information on decline curve analysis. Prior to this project, the UX team had never worked with engineering analysis.

From the user interviews, our product manager created a quick illustration to show all the required content, and the UX team began design work. While giving a few weeks' lead time for design, the developers began investigating different plotting technologies and how to create a UI component that could be integrated into applications built on very different technologies.

We had some initial frustrations when working with the UX team because of their lack of familiarity with the subject matter and the difficulty of working on design when it felt like we were speaking different languages. After a few proposed design ideas seemed alien to the product manager and me, we began feeling skeptical that their designs would be well received by users. A few months had already passed in the project, and the progress we'd made was receiving scrutiny from directors who had expected us to be further along. This external pressure to accelerate and our own misgivings about the design work led us to revert to our usual process and start coming up with our own designs in order to kick off development.

The UX team continued going through their design process; however, we had become completely disconnected on the development side. The UX team preferred to work from their own backlog, completely separate from the stories we had prioritized for our developers. We continued to give them feedback on the designs they were coming up with, but the order in which they approached the work was nearly the inverse of its importance from a user perspective.

This clearly wasn't ideal, so the manager of the UX team had a frank discussion with us to get the relationship working. We started to voice our concerns about the priority of upcoming work more assertively and put emphasis on a user interaction that was of high priority: the way the user manipulates the analysis line on the graph. Up until then, the UX team had not even started working on this. It was something very foreign and complex to them and they had been deferring the work, but for us it was a critical piece of how users interact with the analysis.

3.5 Introduction to New Concepts

At this time I had been reading about Hypothesis-Driven Design [O'Reilly] and Lean UX [Gothelf]. The core ideas I took away from these were to avoid designing everything up front and to solicit frequent feedback from users or sufficient proxies for users.
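O'Reilly's article frames each piece of work as a testable hypothesis rather than a fixed requirement, roughly of the form: "We believe <building this capability> will result in <this outcome>. We will have confidence to proceed when <we see a measurable signal>."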

Prior to this reading, I had not discussed any plans for usability testing with the UX team. I wasn't sure whether the user researcher was experienced in the types of testing I was discovering. I began approaching the group from our user interviews and planning to conduct usability testing with them. While I approached this independently, the researcher and designer were informed and invited to observe our users.

3.6 Decline Component – Iteration 1

As the line manipulation was a primary concern for usability, this is where a lot of our initial user testing effort was focused. The initial manipulation requirements for the analysis were drawn from the capabilities that existed in Harmony – translating the line horizontally and vertically, rotating it, changing its curvature, and moving a reference point without altering the shape. The first iteration of our line manipulation was based on image manipulation in Microsoft Office, from which we borrowed similar visual cues. We took that design and attached to it our understanding of how the equation changed given different manipulations. The outline box defined the extents of the analysis and allowed the ends of the line to be moved vertically or horizontally. The area inside the extents allowed free translation. A circular arrow at a pivot point allowed rotation of the line, while the pivot point could be moved along the line. An arrow perpendicular to the line allowed the curvature to be changed. The idea was that transplanting a widely understood interaction into our engineering tool would make it usable by a wider audience.

Figure 1. Initial Decline Component Design
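To make the manipulation-to-equation mapping concrete, the sketch below shows how a couple of these manipulations could translate into parameter changes, assuming the standard Arps hyperbolic decline form. The function names and numbers are illustrative only and are not our actual implementation.

import math

def hyperbolic_rate(t, qi, di, b):
    # A minimal sketch, assuming Arps hyperbolic decline:
    # qi = initial rate, di = initial decline, b = curvature exponent.
    if b == 0:
        return qi * math.exp(-di * t)  # exponential limit when b is zero
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def translate_vertically(qi, factor):
    # Dragging the whole line up or down on the rate plot scales the initial rate.
    return qi * factor

def change_curvature(b, delta):
    # The perpendicular arrow adjusts the b exponent, bending the line.
    return max(0.0, b + delta)

# Example: apply two manipulations, then recompute the forecast at t = 12 months.
qi, di, b = 1000.0, 0.3, 0.8
qi = translate_vertically(qi, 1.1)   # nudge the line up by 10%
b = change_curvature(b, -0.2)        # flatten the curvature
print(hyperbolic_rate(12.0, qi, di, b))

Even in this reduced form, every manipulation ultimately adjusts the same small set of equation parameters, which hints at why changes to one manipulation so often disturbed the others.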

The implementation of this design was then tested with our user group. Since each of them had some familiarity with decline curve analysis, we gave them free rein to try anything they liked in the test application. While they did need some adjustment time and often had questions about the behavior, the reception seemed quite positive, and we felt confident that these users were comfortable interacting with the decline component.

3.7  The Next Few Iterations

Each iteration we made small adjustments based on the key areas of difficulty we observed in testing. We kept the core concept of the design but tweaked the way users would interact with it to reduce how often they were surprised.

Figure 2. Changes to Line Manipulations

We believed this was going very well. We became more comfortable with the UX team, and they became more comfortable with our expectations and what we considered valuable. We were moving towards a pretty successful design.

3.8 My Moment of Arrogance

Throughout this project, my manager had been very hands-off and had not spent time interacting with our application. As he was the original designer of our legacy decline analysis, I was eager to hear his feedback and certain he would be impressed with the intuitiveness of the design.

By the time he was able to participate in testing, we had gone through several iterations on the original design using the feedback from our test sessions. What I received instead of congratulations was a long list of questions and concerns. He couldn't understand how we had designed something so confusing when we had been soliciting continual feedback.

Instead of asking questions, I gave justifications. I cited the notes I had taken on feedback to demonstrate why the interactions had been implemented that way. I questioned whether his role in the original decline module was biasing what he expected us to build. He still appeared concerned about the end product but accepted that the user feedback was driving our decisions. And I began to question whether the test outcomes were really leading us to an improved design.

3.9 Where We Ended Up

With this growing concern in mind, I set up another test session in early December of 2016 with the same internal users who had been in this test app several times before. They had been away from it for a month while we worked on some additional capabilities. However, in this round of testing, I gave our user researcher the opportunity to take the lead. Prior to this session, I had been leading the user testing; it was fairly unstructured, since most of our participants were familiar with the use and intention of decline analysis. The user researcher would view the sessions remotely and participate in our follow-up discussions, and from there I would decide with the developers what changes to make in the application.

This session was set up in his typical style: predefined tasks, difficulty ratings, and scripted instructions that were the same for all participants. While testing new features, we also included tasks on the usability of the line manipulations. The result came as a shock – our test users couldn't figure out how the manipulations worked. These were people who had been heavily involved in repeated testing, and they couldn't remember it. This was a very frustrating moment for us. We were nearly 6 months into the project and had gone through several iterations, and now we were thinking we would have to tear everything apart and start over.

This got me thinking: How did we get here? How can we be soliciting frequent feedback and still building the wrong thing? Could the simple change in test style have made such a big impact on the results we were seeing?

When I started thinking back to earlier sessions, I began questioning the impact of so many small behaviors and choices. At the outset of the sessions, the researcher would tell users that, while they were free to ask questions, he might not always answer, or his answers might sound vague. When testers asked questions or asked for help performing the tasks, the researcher would remain silent or ask them to continue on their own. I realized this was a stark contrast to the behavior I had been exhibiting: I would instinctively try to assist with their tasks. At the time it felt natural – I had a strong background in support and training, and the tasks they undertook were ones they invented themselves, so it didn't feel like "cheating." I was so optimistic about the work we were undertaking that it was hard not to slip into "selling" users on our application.

Another struggle was trying not to get too sidetracked by encounters with defects. Because we initially had testers work independently, they often wound up in areas with known defects or stressed the application further than the preliminary code could handle. The result was more confusion or slowdowns in testing, and this was often what prompted me to start explaining what they could do – not only to steer them around known deficiencies, but also to help them resume the work (often more efficiently than they had started).

I recently went back and reviewed the feedback we got from our original testing session. What I see now is that our testers were overwhelmed and exposed to far more functionality than they truly needed. The way I interpreted it initially, our idea was innovative: because existing applications were so simplistic, I thought users had become accustomed to limitations and simply needed to adjust to the increased capabilities. We had offered far too many features and the design was very information dense, but I don't believe users instinctively turn down features or ask for less. My bias wasn't letting me hear this, and my intervention during testing didn't give the participants much opportunity to say it.

3.10  Our Last Iteration

Once I started to move past my biases and really look at what the test observations were showing, the design discussions really started to change. We identified which functionality users actually worked with and made valuable use of once they figured out how to use the tool, and we removed what they didn't. A number of things I had believed would be essential were no more than "neat" distractions to our testers and often went ignored unless we specifically asked for them to be used.

Another surprising side benefit was a reduction in the whack-a-mole bug fixing that had been an ongoing problem. Specifically in the line manipulation on the graph, the interaction between the immense number of manipulations and the underlying equations and constraints had become very complex to maintain. Each time we adjusted the manipulations while attempting to improve the experience, we often broke or substantially altered other manipulations or ran into conflicts with constraints in the equation.

This significantly simplified the tool, and our final testing sessions showed substantial improvement in how quickly users were able to grasp how to work in the application, showing it is much more intuitive.

Figure 3. Excerpt from Usability Testing Results

There was a great deal of time lost to rework, but the end product was something the designers, developers, and product management have been very happy with.

4. Looking Forward

At the outset of this project I felt I had a clear understanding of how to solicit and use feedback from usability testing. It all felt like common sense, and we thought the project would be very straightforward. But I have learned a number of lessons about design work and user experience:

  • My familiarity with the subject matter can be helpful but also strongly predisposes me to biases on designing for myself as the user.
  • How usability testing is conducted has a tremendous impact on the feedback and outcomes.
  • Answering users' specific questions during usability testing gives a misleading view of ease of use.
  • Working with a design group that was unfamiliar to me, and that I felt was unfamiliar with the technical space, had a greater impact on my initial trust in their work than I would have anticipated.

Overall this was a very challenging project, but ultimately changing my mindset helped us achieve our objective. In coming to understand design tools and processes better through working with our UX team, I have gained a much greater appreciation of the contribution of their work and have also developed an interest in this field.

I have continued to work with the UX team on new features in our primary application (Harmony), and the lessons I’ve learned from the decline component have fundamentally shifted the way I work with this team. The main changes are:

  • User interviewing and research are done jointly by the SMEs and the UX team, leveraging the expertise of both groups while maintaining better cross-team communication and understanding.
  • Rather than just tasking the UX team with presenting the test outcomes, processing the results starts with a discussion so the user researcher and designers can better understand some of the technical content and its implications from a subject-matter perspective.
  • These discussions have also allowed me to weigh in on requests that are outside the scope of a feature but caught the designer's attention.
  • Before we talk about design, we develop a clear understanding of the user value priorities so that the UX team starts on the highest-value capabilities and avoids investing in requests that have been deferred or are viewed as low impact by product management.
  • The value gained from testing sessions and their resulting design changes has led us to set up monthly usability testing for both new features and existing functionality in the Harmony application. The first session in May allowed us to find simple changes to a new feature that were adjusted just in time for the application release. The feedback also gave our developers more confidence than we usually have right before a release goes out.
  • The biggest impact was that, following the process changes made during this project, our company hired a local designer to work with the UX team while becoming more closely integrated in the engineering technical space. Within his first month of work in our office, all five developer teams made use of him, either for new design work or for feedback on work in progress.

5. Acknowledgements

I would like to thank the very patient designer, Vidu, and researcher, Sebastian, for their participation in this project and helping me understand the value in the usability design space, and to my team of developers who humored us in this experiment and continued to put forward their best work. Thank you to my husband, Chris Edwards, for pushing me to write this experience report when I really didn’t feel there was much to share with the Agile community. And a huge thank you to my shepherd, Rebecca Wirfs-Brock, for the enormous amount of time given to polishing a very rough idea into something I can feel immensely proud of.

REFERENCES

Gothelf, Jeff, Lean UX, O’Reilly Media, 2013.

O’Reilly, Barry, How to Implement Hypothesis-Driven Development, https://barryoreilly.com/2013/10/21/how-to-implement-hypothesis-driven-development/

Author’s address: Krystina Edwards, Suite 800 112 4th Ave SW Calgary AB T2P 0H3; email: [email protected]

Copyright 2017 is held by the author.
