Starting with the End in Mind

When dashboards come up in conversation, the focus almost always turns to the visuals: the charts, the colors, the user interface. And while those elements matter, they are only part of what makes a dashboard useful and usable. A dashboard is only as good as the data behind it, and the sourcing of that data is often overlooked. On the Rally product team, we’ve learned that the real work of creating meaningful dashboards begins long before the data is visualized in the platform.

If we want dashboards that genuinely help educators, we must start with the end in mind: what data should be included, how it should be displayed, and why it will be useful for the people who rely on it. Rally is meant to provide a snapshot of student data, with assessment data being one key area. Rally is not meant to replace the detailed reports provided by assessment vendors; instead, it is meant to provide an effective overview of that data. Therefore, Education Analytics (EA) strives to make well-informed decisions about what to display so teachers and administrators can access the right data at the right time to make the right decisions.

Getting Data to Rally 

Rally relies on the Ed-Fi data standard to source its data; however, not every assessment vendor sends its data through Ed-Fi. Loading assessment data is not a simple process of “dropping scores into the system.” It’s a thoughtful practice of interpretation and collaboration. EA has developed streamlined processes for bringing assessment data into Ed-Fi, using tools such as earthmover (for transforming data) and Runway (for moving it efficiently and securely). These tools let EA fit tabular assessment files into the Ed-Fi data standard so that assessment data can feed downstream applications like Podium and Rally.
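
To make this concrete, here is a minimal sketch, in plain Python rather than earthmover’s actual configuration language, of the kind of reshaping involved: a row from a hypothetical vendor file is mapped onto an Ed-Fi studentAssessment-style payload. The column names, identifiers, and namespaces are illustrative assumptions, not any vendor’s real layout.

```python
import json

# Hypothetical vendor columns: illustrative only, not a real vendor layout.
sample_rows = [
    {"state_student_id": "123456", "test_date": "2025-04-15", "scale_score": "512"},
]

def row_to_student_assessment(row: dict) -> dict:
    """Reshape one tabular vendor row into an Ed-Fi-style studentAssessment payload."""
    return {
        "studentReference": {"studentUniqueId": row["state_student_id"]},
        "assessmentReference": {
            "assessmentIdentifier": "EXAMPLE-MATH-G5",   # assumed identifier
            "namespace": "uri://example-vendor.com",     # assumed namespace
        },
        "administrationDate": row["test_date"],
        "scoreResults": [
            {
                "assessmentReportingMethodDescriptor":
                    "uri://ed-fi.org/AssessmentReportingMethodDescriptor#Scale score",
                "result": row["scale_score"],
                "resultDatatypeTypeDescriptor":
                    "uri://ed-fi.org/ResultDatatypeTypeDescriptor#Integer",
            }
        ],
    }

print(json.dumps(row_to_student_assessment(sample_rows[0]), indent=2))
```

In practice, earthmover expresses this kind of mapping declaratively and Runway handles delivery, but the underlying task is the same: tabular scores in, standards-aligned records out.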

The Role of Bundles 

Every assessment is different; each one measures distinct skills or concepts and presents results in its own format or reporting style. K–12 assessments measure a wide range of student skills and knowledge, from kindergarten readiness (KRA) to college readiness (ACT or SAT), and everything in between: subject-specific achievement, academic growth, gifted abilities, English language proficiency, and more. Each assessment also has a unique approach to scoring; vendors may emphasize scale scores, performance levels, percentiles, or a combination of all three. This means the data files assessment vendors provide vary greatly. Some contain just a few key top-level scores, while others include detailed information such as item-level responses.

That’s where assessment bundles become important. Bundle development is the behind-the-scenes process of ensuring that data from an assessment vendor aligns with the Ed-Fi data standard, and it is where key decisions are made about which data points to include. Typically, EA loads a significant amount of data into Ed-Fi and our Stadium warehouses to support a variety of use cases, though often still less than what a vendor file provides. Data points beyond what Rally displays can be valuable for research, analytics, or accountability purposes.
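
As a simplified illustration of the decisions a bundle encodes, the sketch below declares which score fields from a hypothetical vendor file get loaded and how each maps to a reporting method. The field names and structure are invented for this example; they are not EA’s actual bundle format.

```python
# Hypothetical bundle-style mapping: which vendor fields to load, and how.
# Field names and structure are illustrative, not EA's actual bundle format.
SCORE_MAPPING = {
    "scale_score":       {"load": True,  "reporting_method": "Scale score"},
    "performance_level": {"load": True,  "reporting_method": "Performance level"},
    "raw_score":         {"load": True,  "reporting_method": "Raw score"},  # warehouse/research use
    "item_responses":    {"load": False, "reporting_method": None},         # out of scope for this bundle
}

def selected_scores(vendor_row: dict) -> dict:
    """Keep only the score fields this bundle says to load."""
    return {
        field: vendor_row[field]
        for field, rule in SCORE_MAPPING.items()
        if rule["load"] and field in vendor_row
    }

row = {"scale_score": 512, "performance_level": "Proficient", "raw_score": 34}
print(selected_scores(row))  # {'scale_score': 512, 'performance_level': 'Proficient', 'raw_score': 34}
```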

Being Selective for Visualization 

Because Rally serves different use cases than Stadium, the amount of data loaded into a Stadium warehouse is often greater than what Rally needs. Rally takes additional steps to refine and standardize the data so functionality remains consistent across the platform. The Rally team makes important decisions about what to display and how to display it, such as which color represents the highest performance level, which metrics to include, or how to sort objective assessments. Different assessments also call for different visualization methods: state summative and AP exams align well with performance levels, making stacked bar charts effective, while continuous scores like the SAT or ACT are better shown with histograms. Eligibility data adds another layer of complexity. Often, we only know who took the test, but not who was expected to take it. Without careful consideration of how participation is displayed, dashboards can become confusing or misleading.

Sample view of Rally’s assessment dashboard at the student level.
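
One way to picture the display decisions described above is as a small rule table. The sketch below is a deliberately simplified stand-in for Rally’s actual logic, pairing the score types mentioned earlier with the chart each one suits.

```python
# Simplified stand-in for Rally's display rules: categorical results get
# stacked bars; continuous scores get histograms.
def chart_type(score_kind: str) -> str:
    """Pick a visualization based on how an assessment reports results."""
    if score_kind == "performance_level":   # e.g., state summative, AP exams
        return "stacked_bar"
    if score_kind == "scale_score":         # e.g., SAT, ACT
        return "histogram"
    raise ValueError(f"No display rule for score kind: {score_kind}")

print(chart_type("performance_level"))  # stacked_bar
print(chart_type("scale_score"))        # histogram
```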

Scales also matter. A histogram with a wide range of distinct scores may look great when you’re dealing with 30,000 students across a state, but it loses impact in the context of a single district, school, or classroom. And then there are the quirks of each vendor. For example, an AP score of “5” carries a very different meaning than a “5” in another assessment. Rally must preserve those nuances while keeping visuals clear and consistent.
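
The scale problem can be made concrete with a binning rule. The sketch below uses Sturges’ rule, one common heuristic and not necessarily what Rally uses, so that a classroom of 25 students gets far fewer histogram bins than a statewide population of 30,000.

```python
import math

def histogram_bins(n_students: int, n_distinct_scores: int) -> int:
    """Scale bin count with cohort size (Sturges' rule), capped by the
    number of distinct scores actually observed."""
    if n_students < 1:
        return 1
    sturges = math.ceil(math.log2(n_students)) + 1
    return max(1, min(sturges, n_distinct_scores))

print(histogram_bins(30_000, 120))  # statewide cohort: 16 bins
print(histogram_bins(25, 120))      # single classroom: 6 bins
```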

How We’re Tackling It in Rally

To address these challenges, we lean heavily on vendor language so that the terms teachers see in Rally match what they already know from official reports. We also rely on assessment bundles as our source of truth, creating a standardized foundation that ensures consistency across very different types of assessments. Visual flexibility is key, too: a stacked bar chart might be the right choice for an overall score, while a histogram might be required for a sub-score. At every step, our priority is the educator’s needs. For each assessment, we build a user story that asks what an educator expects to see and why; this builds empathy on our team and keeps us focused on our users. From there, we write acceptance criteria that detail exactly how an assessment is displayed in Rally.

Looking Ahead 

Our focus is on refining how we load and map data so that we continue to balance vendor integrity, educator usability, and product consistency. We want to involve educators more directly in this process, so our assumptions get tested earlier. We’ll also keep encouraging more vendors to adopt Ed-Fi standards and building tools to facilitate that process.  

Lessons Learned 

Through this work, we’ve learned a few important lessons: 

  1. Assumptions made during visualization design often break down once they’re tested against real-world data. What looks great in a mockup doesn’t always translate well given the complexity of vendor assessment data.
  2. While standardization is powerful, we must carefully consider how to apply those data standards to ensure the visuals communicate effectively.
  3. Iteration with real users is not optional. Educators know best what’s meaningful, and their feedback is essential.
  4. The real value isn’t showing every possible piece of data — it’s showing the right data in the right way. 

At the end of the school day, dashboards aren’t defined by their charts or their color schemes. They’re defined by the choices we make about data long before it becomes a visual. Rally’s promise isn’t to replace vendor reports, but to make assessment data usable, comparable, and actionable. By focusing on the inputs as much as the outputs, we can create dashboards that deliver genuine value for educators, administrators, and, ultimately, for students. 

At Education Analytics, we believe the best dashboards start with collaboration.

Let’s work together to make sure the right data tells the right story. Contact us to learn more about how we can support your data integration and visualization goals.
