Organizations today spend huge amounts of money and time building dashboards. “If only we had a dazzling dashboard, we’d finally become data-driven,” the thinking goes, but this couldn’t be farther from the truth. In reality, only 45% of people with access to BI tools actually report using them, according to a 2017 report by Logix. The most frequent challenge they cite: the dashboards are just too difficult to use.

Different people make decisions differently. If you want to build dashboards that help them make better decisions, you first have to understand how and why they make those decisions. You could sit on a treasure trove of data and make the fanciest graphs, but if you’re not separating the signal from the noise and presenting the right data in the right way, it’s all moot.

At SocialCops, we actively try to avoid this “dazzling dashboard” trap. One of our core values is “Problem first, solution second.” One way this translates into practice is that we spend 80% of our time figuring out a problem and 20% designing the solution (often, a dashboard). The more time we spend scoping out a problem statement, the less time we end up spending on implementation and the more effective the implementation becomes.

As someone who always jumps straight to execution when confronted with a problem, I found this counterintuitive to everything I had known. But my work on a project for the Children’s Investment Fund Foundation (CIFF) completely changed my mind. As the project went on, it became clear why it’s so important to figure out your users and the decisions they’re making before getting your hands dirty crunching data or building dashboards.

In this article, I’ll walk through 8 steps for building dashboards that your users will actually use and love, told through CIFF’s story.

Step 1: Literature review: not just for academia

It may seem tempting to bypass the literature review, but it’s crucial to building a great dashboard. You can’t skip your homework before conducting user interviews. A literature review sets a strong foundation and helps you understand the problem statement in greater depth.

CIFF is a donor organization that works to improve the lives of vulnerable children and adolescents in developing countries through several child-centric programs. They came to us with a very interesting problem statement: build a dashboard that would let them evaluate, in depth, the actual on-ground impact of their child welfare projects.

Of course, we knew about child welfare, but what does it actually mean? How is it defined? To set a good foundation for the dashboard, we extensively reviewed the child welfare literature, using all relevant material we could find — research papers, journals, and books. During this process, we first did the following:

  • Defined age brackets for a child (e.g. 0-1 year old is an infant, and 1-5 is a child)
  • Defined the scope and objectives of “child welfare” for this project
  • Outlined the sectors within child welfare, such as education, health, nutrition, and crime
  • Determined our index’s focus group: children who are vulnerable and need immediate attention

We also examined the child-related indices that exist in India and globally. These indices showed us which important indicators are being tracked at a global scale, and they surfaced some interesting insights.

[Image: Table of contents from our research report for CIFF]

For example, the types of indicators used in a country’s indices vary according to how developed the country is. In a highly developed country like the U.S., child welfare indices might lean towards indicators like higher education, provision of social security, and discrimination in education; whereas in India, indicators focus more on combating malnutrition, providing high school education, and ensuring proper sanitation facilities at schools.

Step 2: Always start with the user

The best dashboards start with the people who will actually use them. Identify the different user groups, the problems they need to solve, and what they need out of the dashboard. The best way to do this is through field visits and in-person interviews; if those aren’t possible, calls help as well.

Our next move was understanding the users of the dashboard, what they cared about, and what information and insights mattered to them.

Since not all users would act the same way or have the same objectives, it was essential to analyze each user’s personality, role, and responsibilities. We interviewed different groups of users, then mapped the user groups according to the decisions they needed to make using the dashboard. For instance:

  • The CMO and Chief Secretaries were interested in getting an overall snapshot of the status of child welfare in their state and comparing its performance with other states. Hence, for them, the dashboard needed to help inform and drive their policies and strategies.
  • The Project and Schemes Directors needed to track the pace and outcomes of their projects. So the dashboard needed to point to specific bottlenecks in their respective projects and help them influence on-ground activities.
  • CIFF and their partner NGOs were keen on understanding key priorities they should target in child-related sectors. This would help them fund and design interventions that cater to those priorities and their triggers.

Step 3: Questions first, not data

Start with the questions your dashboard needs to answer. Move from broad questions about the problem to a consolidated list of actionable questions that will point you to a solution. These questions will be at the core of your dashboard.

Rather than looking at CIFF’s data and figuring out how to display it, we next looked at CIFF’s needs (which came out during the user interviews). What questions would CIFF need to answer to measure its impact? We went back to the drawing board and started with very fundamental questions: “How successful are the programs in achieving their outcomes?”, “What is the programs’ coverage?”, “How is the state performing compared to India as a whole?”, and so on.

However, we didn’t stop there. These questions would help CIFF understand how they were doing, but not what they should do next. To make our solution more action-oriented, we went a step further: “What are the most vulnerable groups of children?”, “Where are they located?”, “Which programs and interventions are required in those areas?”, “Which interventions have been the most successful for them?”, “Where are the current programs breaking down?”, “What peripheral factors are playing a role?”, and more.

[Image: A screenshot from our dashboard mockup for CIFF, which helps them track various programs and interventions]

At the end of this exercise, we all agreed that answering these questions would equip every dashboard user with the insights they need to learn which programs aren’t performing well, expose critical bottlenecks, and show potential next steps.

Step 4: Zero in on North Star metrics

Consolidate your set of questions into a core set of metrics or indicators that give users a high-level view of their sector or issue at a single glance. This helps users focus on what really matters, rather than being distracted by a huge set of metrics and graphs.

The next step was to figure out how to answer these questions and reveal the on-ground reality in a data-driven way. The challenge — child welfare is a deeply complex subject. It teems with countless policies, laws and sectors, as well as a complex set of stakeholders (government departments and ministries, philanthropies, national and international agencies, and so on).


The challenge was not limited to the fact that there were multiple, diverse stakeholders involved. Another challenge was that the questions we wanted to answer couldn’t be captured by a simple set of metrics, since our stakeholders understood and measured them in different ways. Also, child welfare is anything but simple, so each question inevitably touched on multiple sectors. To handle this complexity and bring these questions and sectors together, we created individual sector indices, one for each sector that affected child welfare, such as education and nutrition. We then combined these indices into a single index, the Child Welfare Index for the state, which would act as our single, unified North Star metric.

This index would be created by consolidating data from all the departments and development partners onto one platform. Thus, it would automatically take into account the different “defining measures” for different programs and present them in a clear, concise way. This would help users understand their own parts of the puzzle and see how those pieces fit into the bigger picture of child welfare in their state.
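To make this concrete, here is a minimal sketch of how sector indices can be rolled up into a single composite metric. The indicator names, the sector groupings, and the simple mean-of-means aggregation are all illustrative assumptions, not CIFF’s actual methodology:

```python
import pandas as pd

# Hypothetical mapping of indicators to sectors. Indicators where a higher
# value is worse (e.g. stunting) are assumed to be inverted beforehand.
SECTORS = {
    "education": ["enrollment_rate", "school_sanitation_rate"],
    "nutrition": ["stunting_rate_inverted", "anemia_rate_inverted"],
    "health":    ["immunization_rate", "institutional_birth_rate"],
}

def min_max_normalize(s: pd.Series) -> pd.Series:
    """Rescale an indicator to 0-1 so indicators with different units are comparable."""
    return (s - s.min()) / (s.max() - s.min())

def child_welfare_index(df: pd.DataFrame) -> pd.DataFrame:
    """Compute one index per sector, then combine them into a single North Star metric."""
    out = pd.DataFrame(index=df.index)
    for sector, indicators in SECTORS.items():
        # Each sector index is the mean of its normalized indicators.
        out[sector] = df[indicators].apply(min_max_normalize).mean(axis=1)
    # Equal weights across sectors: a plain unweighted mean.
    out["child_welfare_index"] = out[list(SECTORS)].mean(axis=1)
    return out
```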

Step 5: Storyboard how users will make their decisions

Take each user group, zero in on the decision(s) that they need to make, and create a layout for the visualization(s) or screen(s) that will reveal the exact insights they need. Go user group by user group to make sure that your dashboard will cater to every user’s specific needs. A rough sketch on a whiteboard or piece of paper can go a long way.

Next, we needed to create the “decision flow” for the dashboard, or how users would navigate the dashboard to come to a decision. Our user research came in handy as we storyboarded the decisions that the users would need to make. While brainstorming, we realized that the three buckets of users would need three separate types of screens.

[Image: Different screens help users navigate the dashboard better and make better decisions]

Let me give some examples to shed light on what exactly this means:

  • The Secretary of the Education Department would need to track the overall performance of the education sector in the state. For users like him, we designed an overview screen with sector-wise indices, along with the performance of geographies and key indicators.
  • The Project Director of Child Health under NHM (National Health Mission) would be prompted by a drop in enrollment at anganwadi centres, so he’d need to figure out the reasons for it and clear any roadblocks. For users in this group, we designed screens that measured project outputs (or outcomes), along with the inputs required to successfully obtain those outputs.
  • If CIFF and its partner organizations learn the factors that cause children to drop out of school (for example, child marriages, crimes against children, illnesses, etc.), they can address them through their projects and investments. For them, we designed screens with the primary KPIs and their trigger indicators to show how indicators were related across domains.

As we created these designs, we took feedback from the users on the definitions, the data sets, and more. Regular feedback helped us ensure that the dashboard catered to their needs and requirements. We incorporated their suggestions and worked on any issues they pointed out, which saved us a ton of time down the line.

Step 6: Find what data is available, and what it looks like

Data scoping can be tedious, but it’s important to know what data is available and what it looks like. Deep dive into the structure and indicators of all possible data systems to zero in on the final indicators. (If the datasets themselves aren’t available, you can learn about them by looking at the forms used to collect that data.)

The next task was reviewing what data was available for this dashboard. The state government’s existing data systems all lived in private silos; there was no integration or hook through which these systems could communicate with each other.

We knew we’d have to bring these data sets and systems together, so we first had to understand them inside and out. This meant finding all relevant data sets across different government departments, sectors, projects, and schemes.

We studied the characteristics of each data set we found, such as the source (e.g. government body, civil society, think tank), data type (e.g. MIS, report, survey), and format (e.g. Excel, CSV, PDF). This gave us a comprehensive picture of the available data sets and their metadata. We then deep-dived into the data sets one by one and learned about their indicators and variables. We focused on their granularity (state, district, block, village, beneficiary, etc.), disaggregations (by gender, age, caste, region, etc.), and the available years along with the periodicity (how frequently the data is updated).
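As an illustration, this kind of scoping can be captured in a small machine-readable catalog that the whole team can query. The entries below are hypothetical examples (loosely modeled on real Indian data sources), not our actual scoping output:

```python
import pandas as pd

# A hypothetical metadata catalog built during data scoping.
catalog = pd.DataFrame([
    {"dataset": "DISE", "source": "government body", "type": "MIS",
     "format": "CSV", "granularity": "school",
     "disaggregations": "gender, caste", "years": "2014-2018", "periodicity": "annual"},
    {"dataset": "NFHS-4", "source": "government body", "type": "survey",
     "format": "PDF", "granularity": "district",
     "disaggregations": "gender, age, region", "years": "2015-16", "periodicity": "~5 years"},
])

# For example: which data sets are available at district level or finer?
print(catalog[catalog["granularity"].isin(["school", "block", "district"])])
```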

The biggest challenge in creating an index is that its indicators need to be consistent, which can be tricky when they are collated from different data sources. For instance, they need to (1) be present at the requisite granularity, (2) be recorded for the same time period, and (3) be consistently defined. So we did some basic data cleaning to deal with missing values, outliers, and variances.
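Here is a hedged sketch of what that basic cleaning might look like; the median fill, the 1st/99th-percentile winsorization, and the column names are all illustrative assumptions:

```python
import pandas as pd

def clean_indicator(df: pd.DataFrame, indicator: str, year: int) -> pd.Series:
    """Align one indicator to a common year, then tame missing values and outliers."""
    s = df.loc[df["year"] == year, indicator]
    # Fill missing values with the median, a simple and robust default.
    s = s.fillna(s.median())
    # Clip extreme outliers to the 1st and 99th percentiles (winsorization).
    low, high = s.quantile([0.01, 0.99])
    return s.clip(lower=low, upper=high)
```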

To complete our data scoping, we embarked on our second research phase. We reviewed schemes and program objectives to finalize our list of input and output indicators. Additionally, we did hands-on research to formulate the list of child-welfare KPIs and their related indicators.

Step 7: Hone the methodology for North Star metrics

The next step is finalizing the methodology for the dashboard’s North Star metrics. This is the place where data scientists and statisticians should be consulted. They can help you figure out how to build a strong index, as well as what other analysis you can do with your indicators.

Now that we knew what data sets and indicators were available, we could create our final methodology for the Child Welfare Index. Making an index involves creating a list of indicators and combining them in a mathematically sound way.

We handpicked the most significant indicators and normalized them. We started by giving all the indicators equal weights, as we believe all the sectors involved in children’s welfare are equally important. However, we plan to improve these weights by consulting industry experts and using statistical methods like Principal Component Analysis and Singular Value Decomposition for dimensionality reduction.
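As an illustration of that refinement, here is one standard way to derive indicator weights from the first principal component. This is a generic PCA sketch on dummy data, not our finalized methodology:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import minmax_scale

def pca_weights(X: np.ndarray) -> np.ndarray:
    """Derive indicator weights from the loadings of the first principal component."""
    loadings = np.abs(PCA(n_components=1).fit(X).components_[0])
    return loadings / loadings.sum()  # weights sum to 1

# Dummy data: 30 districts x 6 indicators, normalized to 0-1.
X = minmax_scale(np.random.rand(30, 6))

equal_weighted = X @ np.full(6, 1 / 6)  # our starting point: equal weights
pca_weighted = X @ pca_weights(X)       # a data-driven refinement
```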

We also refined the index by disaggregating it with stratifiers such as gender, age, caste, and region, and proposed calculating it across time so that users could see how child welfare in different geographies changed from year to year.
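Once the index is disaggregated, tracking it across stratifiers and years is a straightforward group-by; the data below is dummy data purely to show the shape of the computation:

```python
import pandas as pd

# Dummy disaggregated index values: one row per (district, gender, year).
df = pd.DataFrame({
    "district": ["A", "A", "B", "B"] * 2,
    "gender":   ["girls", "boys"] * 4,
    "year":     [2017] * 4 + [2018] * 4,
    "child_welfare_index": [0.42, 0.48, 0.51, 0.55, 0.45, 0.50, 0.53, 0.58],
})

# How did each group's index move year over year?
trend = df.groupby(["gender", "year"])["child_welfare_index"].mean().unstack("year")
print(trend)
```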

Step 8: Build a prototype for user feedback and design iterations

You don’t need a final dashboard to see what your users think. Even a simple prototype can be a great way to kick off a loop between user feedback and design iterations. Platforms like InVision, or even an interactive PPT, are sufficient.


Once we had a rough sketch of our storyboards, we started creating the mock-up: a prototype of how the actual dashboard would look and function. We then brought the prototype to CIFF and the other stakeholders and asked them to use it. This helped us gain an in-depth understanding of whether they could make their decisions using the insights on the dashboard.

This process is also super helpful because it gets users engaged with the problems and encourages them to share some really valuable feedback. In other words, the mock-up acts as a base that helps them gain clarity. These feedback loops continue until everyone is convinced that the dashboard is just right.

Is this process worth the time?

Clearly, this dashboard scoping and design process takes time. It takes us 8 steps to come up with a user-centered design, and that doesn’t even account for the future data processing, cleaning and standardization, setting up the data pipeline, or building the actual dashboard.

So is this process worthwhile? We think it is.

It’s tempting to look at the data, pull out the best indicators and visualize them in the prettiest way, but too often that just leads to a dazzling dashboard that’s only used once or twice a year. Instead, zeroing in on the people who will use a dashboard and the decision it ultimately needs to drive can help you build a dashboard that your users will never put down.

In our case, this process was especially important because so much rides on the dashboard itself. Through this dashboard, CIFF and their partners will be able to track how their investments are helping curb child mortality and malnutrition; government officials will be able to track how schemes like POSHAN and RJJSY are helping curb infant and maternal mortality; and the Program Directors will be able to track which factors caused the quality of education in primary schools to shoot up. If this dashboard isn’t understandable and easy to use, CIFF and all their stakeholders will have wasted a lot of time and money without getting any closer to the answers they need.

[Image: A screenshot from our dashboard mockup that will help CIFF partners identify and address problems based on their projects and investments]

In addition, we’ve found that an extensive scoping and design process helps us foresee — and prevent — challenges that will arise while we’re actually creating the dashboard. For example, extensive data scoping helps us understand what the final dashboard can and can’t include. Most indicators in DISE (India’s largest education data set) have lots of disaggregations, but those disaggregations aren’t available in other data sets. Without data scoping, we might have ended up with inconsistent indicators or a poorly built index.

Hence, this scoping process helps us prepare for these challenges and figure out how to tackle them before they derail our dashboard. It also helps us make the stakeholders aware of these challenges, so they’re ready and willing to pitch in with additional help or resources if they need to.


Interested in learning more about how to scope and design a great dashboard? Resources on design thinking are a great place to start.

Good luck with building your own intuitive, user-friendly dashboards!