- MY ROLE: UX Designer, Infographic Designer, User Researcher
- PROJECT GOAL: To provide an easy reference for the team and corporate stakeholders to understand the testing process we are implementing.
- PROJECT OUTCOME: The graphic was used by the product owners at IBM to help articulate the need for robust user testing.
My team has recently been putting real effort into our analytics strategy. We even brought in a dedicated analytics expert to help us do it right. (I am thrilled about this and am constantly picking her brain.) She hit the ground running, and one of the things the corporate stakeholders are most excited about is improving our experience through A/B testing, so much so that they have been pushing us to start ASAP.
This led to our analytics rockstar and me having a great discussion about when, how, and what to test in the best way. The first thing we did was look at the tools at our disposal, which include a robust analytics dashboard, heatmapping, survey data, A/B testing, and a group of users willing to be our guinea pigs (!!!). We wanted to communicate that although we are excited to begin A/B testing things, we have to back up a little in order to get the most out of those tests.
The initial sketch of the process
The sketch was then refined to include a “spectrum” of testing techniques ranging from qualitative to quantitative.
The next thing she did was create a list of Key Performance Indicators (KPIs) that we want to measure against (and hopefully improve). To do this, we needed to gather all the quantitative analytics and survey data, along with the qualitative user interviews and survey feedback, to figure out how the KPIs had been performing. We focused on where the worst pain points were; that is where we want to start the testing.
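To make that prioritization step concrete, here is a minimal sketch of how the ranking might work. The KPI names, current values, and targets below are all hypothetical placeholders; this illustrates the idea, not the actual analysis she ran:

```python
# A minimal sketch of the prioritization step. The KPI names, current
# values, and targets are all hypothetical placeholders.
baselines = {
    # KPI: (current value, target value)
    "task_completion_rate": (0.62, 0.85),
    "checkout_conversion": (0.031, 0.050),
    "search_success_rate": (0.71, 0.80),
}

def shortfall(current, target):
    """Relative gap between where a KPI is and where we want it to be."""
    return (target - current) / target

# Rank KPIs by how far they fall short of target: the worst pain points
# float to the top as candidates for the first round of A/B tests.
ranked = sorted(baselines.items(), key=lambda kv: shortfall(*kv[1]), reverse=True)
for kpi, (current, target) in ranked:
    print(f"{kpi}: currently {current}, target {target}, gap {shortfall(current, target):.0%}")
```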
In order to get a good data set from the tests, we have to leave them up for a while, watching what users do via analytics and running heatmapping on the pages we are testing for additional data. Ideally, we would complement this with some in-person usability tests where we can actually watch users try out our A's and B's to get a little more insight into how they are performing.
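One standard way to answer the "have we left the test up long enough?" question is a two-proportion z-test on each variant's conversion counts. The sketch below uses made-up traffic numbers, purely for illustration:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of variants A and B.
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p-value
    return z, p_value

# Hypothetical counts: 480/12000 conversions for A, 540/12000 for B
z, p = two_proportion_z_test(480, 12_000, 540, 12_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # keep the test running if p is still high
```

The point of the check is exactly what the paragraph above describes: resisting the urge to call a winner before the data set is big enough for the difference to be trustworthy.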
The final step would be to update the pages we tested so that the winner of each A/B test becomes the final design. We would then watch the analytics that measure the KPIs over a longer time frame to make sure that the changes we made were truly effective.
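As a rough illustration of that longer-term check (the numbers below are randomly generated, purely to show the shape of the comparison), we might compare a KPI's daily average before and after the winning design shipped:

```python
# A minimal sketch with synthetic data: compare a KPI's daily average
# before and after the winning variant shipped. Real monitoring would
# pull from our analytics dashboard, not random numbers.
import random
from statistics import mean

random.seed(1)
before_launch = [random.gauss(0.040, 0.004) for _ in range(30)]  # 30 days pre-launch
after_launch = [random.gauss(0.046, 0.004) for _ in range(30)]   # 30 days post-launch

print(f"pre-launch mean:  {mean(before_launch):.4f}")
print(f"post-launch mean: {mean(after_launch):.4f}")
# A sustained lift over the longer window suggests the change is truly
# effective, not just a short-term novelty bump.
```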
Here is the updated graphic that illustrates the process at a high level.
