What I learned from building a data product in a crisis

I work at Citizens Advice. The Covid-19 pandemic has had a dramatic impact across our services, and we have seen an incredible response from staff in the organisation. For example:

  • Unprecedented demand for our website content
  • Creation of new, trusted expert advice content at speed
  • Stopping the provision of advice in person at the 250+ independent local Citizens Advice locations due to lockdown measures
  • A resulting shift to providing advice through our other channels, such as telephone
  • A pronounced change in the patterns of issues that our clients are coming to us with

I lead the Data team. I work closely with my colleague Tom, who leads the Impact team. Broadly speaking, my team is responsible for making data available, and Tom's team is responsible for asking questions of it.

    On 19 March Tom and I were asked to draw together a wide variety of operational data into a single place, to help management and leadership navigate the crisis. It would include activity and demand data for various channels, and data on client demographics and breakdown by issue. It would also include a summary for a quick, headline view.

Citizens Advice colleagues have spoken on social media and in the news in the past couple of months about our data and what it is telling us. In this post I wanted to reflect on the process and experience of "making the thing", rather than on what's in that thing or what it means.

    It's been a really rewarding experience and I have learned lessons from it that I thought would be worth sharing.

    Get something out there

    It was crisis time, and we received an understandably loose brief as a result. We brought together a group from both of our teams, and came up with a first iteration in 4 days.

    We made a spreadsheet. Spreadsheets are great.

It is a spreadsheet that's intended to be read by humans, rather than a spreadsheet that's a data source. We collectively agreed to make a spreadsheet, having been given a steer not to build something in the proprietary data analysis tool that's widely used at Citizens Advice. Initially we thought it could be a slide deck, and I had a strong view it should be in Excel, but the consensus was to go with Google Sheets. G Suite is what we use at Citizens Advice, and Sheets has the advantage of being easy to share and less of a daily overhead to maintain than a slide deck.

    Ideally we would have had a better understanding of user needs, and some clearer questions to be asked of the data. Regardless, our first version was well received, and put us in a position to improve it regularly. Me aside, the team we put together has a really good understanding of how Citizens Advice works, and I think this helped with our initial and continued success.

    Ask for expert input

    We have a wide range of expertise at Citizens Advice, including a really strong digital team in the form of Customer Journey. I was able to ask for input from a designer and content expert before we circulated the first version of the report. This really improved the product, helping it to be clearer and easier to navigate. Later, when we were trying to understand how the report was being used, I had input from a user researcher on survey questions.

    Even if you aren't working in a 'model' multidisciplinary team, it doesn't mean you shouldn't take a multidisciplinary approach. And if you don't have this kind of expertise to hand, just asking somebody for an objective view and opening yourself up to constructive criticism is always good.

    Work to understand your users

    Again, it wasn't ideal to get into this after the fact. But it's essential to try, and I thought what we ended up doing was neat and pragmatic.

    We were able to get a log of all the staff who had accessed the report, and when. From this, we were able to build a picture of when the report was being used - early in the morning, justifying the demand for an early update of the data. It also gave us a list of people to survey.
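As a rough illustration, the analysis involved was very lightweight - something like the sketch below, written here in Python with pandas and assuming the access log could be exported as a CSV. The file name and the columns (user_email, accessed_at) are made up for the example, not the actual export.

```python
# A minimal sketch, assuming the access log is exported as a CSV with
# hypothetical columns "user_email" and "accessed_at" (a timestamp).
import pandas as pd

log = pd.read_csv("report_access_log.csv", parse_dates=["accessed_at"])

# When is the report read? Count accesses by hour of the day.
accesses_by_hour = log["accessed_at"].dt.hour.value_counts().sort_index()
print(accesses_by_hour)  # in our case, a peak early in the morning

# Who should we survey? Every distinct person who has opened the report.
survey_list = log["user_email"].drop_duplicates().sort_values()
survey_list.to_csv("survey_recipients.csv", index=False)
```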

I wrote a short survey in Google Forms and around 40% of our users responded. I was most interested in whether the report had resulted in people taking decisions, or doing things differently. For me, this is the benchmark for all reporting - if you're not taking decisions as a result, why go to the expense of doing it?

    So seeing this pie chart [1] showing that over 70% of people had taken decisions or actions as a result of this work was really gratifying:


The follow-up question asked for detail about what those decisions were. This gave us an understanding of the breadth of decisions, the variety of our users, and the relevance of the report across multiple parts of our organisation.

The next thing I was most interested in was whether users could understand what was in the report. I think the barrier to entry for data products needs to be as low as possible, and that data-focused teams can tend to take understanding for granted. This pie chart indicates that the report was pretty legible, but that further work could be done:

    Being able to do these two humble tests meant a great deal to me.
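For a flavour of how modest those tests were, here's a similarly minimal sketch of the tally behind the first chart, assuming the survey responses were exported from Google Forms as a CSV. The file name, column heading and answer values are illustrative, not the real form.

```python
# A minimal sketch, assuming Google Forms responses exported to CSV with a
# hypothetical Yes/No column asking whether the report led to a decision.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")
question = "Have you taken a decision or action as a result of this report?"

# Share of respondents answering each way - the numbers behind the pie chart.
shares = responses[question].value_counts(normalize=True).mul(100).round(1)
print(shares)  # e.g. Yes just over 70%, No the remainder
```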

    Keep iterating

    The report has been constantly changed over the past couple of months, in response to new requests, changing circumstances, and a team attitude of continuous improvement. It's been good to be part of a group where it's understood from the outset that the thing will never be complete.

    I think this has been supported by the team practice, which is curious and conversational. Speaking of which...

    Have regular discussion

    I think the team practice has been the most valuable thing to emerge from this work. We have a weekly discussion about what the data in the report is telling us, facilitated by Tom. This encourages ownership and recognises expertise, because the people closest to the data are driving the conversation. Some really valuable lines of thought and investigation have come out of these meetings. We thought it was so good that we started inviting guests, and they've found it valuable too [2].

    We separated out the mechanics of developing the product from the discussion of the data. We have had a fortnightly meeting to do that, led by me. That's worked well, I think in particular because the team have a high degree of autonomy, with individuals trusted to work on their own parts of the report with some light oversight from the most experienced team members.

    Take things out and do less

    The first stage of the crisis called for daily updates. This is unsustainable over time, and developing the report helped us to understand the various states of the data we have. Some data is easy to automate, whereas some requires a large amount of manual intervention and also changes shape regularly, making it labour intensive to report on. This has been a helpful secondary outcome of the work, because it can help inform where we put effort to improve our underlying systems and practices.

Not everything we've done was useful, or used. So we've taken things out. In future, I will work to understand what's being used in a more methodical way. I missed an opportunity to ask a question in the first survey: "Which tabs are you using?", with a pick list of answers. We also tried to track usage using Google Analytics, but it was unsatisfactory.

    Due to the iterative nature of the work and the regular discussion of patterns, it also became clear to the team that the time periods for significant changes in the data were longer than daily. If we hadn't kept developing and discussing, we might have been saddled with a daily reporting burden for longer. This also gives us a sense of the various cadences of the different operational data we have at Citizens Advice. Not everything behaves like the instantaneous spike we'd expect to see on our website after a government announcement, for example. Our service is broad and varied, and to my mind that variety helps us to meet the complex needs of our clients.

    Help people to understand the questions they need to ask

I think one of the most important areas of work for data maturity in public service is to help people to formulate the questions they want to ask of the data. It helps to focus the discussion around, and the build of, any data product. Generally speaking, "give me all the data" isn't the best starting point, and there's no intrinsic good in having large amounts of data and hoping that some incredible truth will emerge from it.

In my experience over the past couple of months, these questions have increasingly come from colleagues, which is great to see. A polite challenge of "what questions do you want to ask?" has been useful.

    You don't need to serve everybody with one thing

    I think we've resisted putting ever more measures in our report just because it's the first time so much of our data has been together in one place. The survey showed us that our users have different needs, and we recognised these might be better met by other products in future.

I said earlier that the thing will never be complete, but that isn't the same as it never being finished - 'finished' as in set aside in order to move on to the next thing, taking what you learned with you.


    Footnotes

    [1] Better charts are available
    [2] This is what they tell us at least