At Education Analytics, we’re trying to do more than just “bridge the gap” between research and technology: we’re trying to create a new kind of organization that leverages the strengths of each field to solve the most pressing problems in public education. It may be too early to say whether we’ve succeeded, but we’ve certainly seen some successes. Over the last 12 years of building and scaling rigorous analytics in education, we’ve lived the reality that researchers and technologists often talk past each other, and we’ve observed the cultural differences that make it difficult to translate between the two fields. We think it’s critical to our mission that experts from both fields collaborate, communicate, and cooperate if we are going to solve the intractable problems preventing education from becoming an information-rich, evidence-driven industry. Here, we share some of the lessons we’ve learned as we’ve worked to build a non-profit research and technology firm.

The EA Research Story

EA’s roots stem from the work of our founder, Dr. Robert Meyer, at the University of Wisconsin-Madison. Many of our longest-serving staff worked at a research center at the university before Education Analytics was founded. In line with the staffing model common in academic research centers, PhD-trained scientists led individual projects with support staff such as research assistants. This worked well for developing expertise within projects, but it was a major obstacle to scaling efficiently. Ultimately, the goal was to do high-quality work economically for our publicly funded education agency partners. The standard academic model did not lend itself to efficiency or cost savings, though: even though different projects shared the same general goals, data types, statistical models, and desired outputs, each project had completely different code, documentation, and approaches.

When EA was started, we decided to structure ourselves a little differently from an academic research center, even as we continued to deliver research services. We believed early on that to scale our impact, we needed to organize ourselves like a software company, even if we weren’t really building software (yet). This structure in turn informed our norms and our culture. For instance, we homed in on standardizing whatever could be standardized: creating reusable code packages for common analyses, using the same open-source software across projects, creating and enforcing code standards, standardizing data after acquisition, and more. This drastically increased our efficiency, which not only reduced costs for education agencies but also freed our staff from manual data wrangling so they could spend more time on creative problem solving, partner support, and development.
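
To make “standardizing what could be standardized” concrete, here is a minimal sketch of the pattern, not our actual codebase: every project conforms its raw extracts to one shared schema, so a single reusable function can serve many analyses. The schema, column names, and summary metric below are hypothetical, purely for illustration.

```python
import pandas as pd

# A hypothetical shared schema that every project conforms its extracts to.
STANDARD_COLUMNS = ["student_id", "school_id", "grade", "subject",
                    "prior_score", "current_score"]

def conform_extract(raw: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    """Rename one source system's columns to the shared schema and keep only those fields."""
    df = raw.rename(columns=column_map)
    missing = set(STANDARD_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"Extract is missing required fields: {sorted(missing)}")
    return df[STANDARD_COLUMNS]

def mean_score_change(df: pd.DataFrame) -> pd.DataFrame:
    """A toy 'common analysis' that any project can reuse once its data are conformed."""
    df = df.assign(score_change=df["current_score"] - df["prior_score"])
    return (df.groupby(["school_id", "subject"], as_index=False)["score_change"]
              .mean()
              .rename(columns={"score_change": "mean_score_change"}))
```

The payoff of this kind of pattern is that the analysis code is written, reviewed, and tested once, while each new project only supplies a small mapping from its source system to the shared schema.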

During EA’s early years, this recipe generated a lot of value for our partners, who were receiving research-grade analytics with software-grade practices at a lower cost than other organizations were able to provide under the academic model. But integral to that success was preserving our roots in academia, efficiently leveraging the deep technical expertise and training of our research scientists, and ensuring researchers remained deeply involved in the entire project lifecycle. 

The EA Technology Story 

Over the years, the policy landscape began changing in ways that shifted the market for analytics in K-12 education. No Child Left Behind was replaced by the Every Student Succeeds Act, which infused the field with an enormous amount of creative energy to go beyond performance metrics based solely on annual, standardized state assessments. We found that our partners started asking us not just for our bread-and-butter student growth metrics, but also for a multitude of other analytics and research projects.

As the scale and diversity of our research and analytics projects grew, so too did the problems of data quality. Our research and analytics were only as good as the data behind them, and with each new type of project, too much of our effort was spent cleaning, conforming, merging, and reformatting data from various disconnected, messy source systems. A pivotal moment in EA’s emergence as a technology company came when we started reaching “backwards” into the data pipeline: not just receiving data extracts from source systems, but working to understand the structure and formats of the data at their source. More and more, we became thought partners to our education agency clients, advising them not just on how to analyze their data, but on how to better structure it and set it up for analytics.
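
To give a flavor of that wrangling, here is a hypothetical example of reconciling student records from two disconnected systems before any analysis can start; the systems, columns, and ID formats are invented for illustration.

```python
import pandas as pd

# A student information system extract: zero-padded local IDs, combined name, US-style dates.
sis = pd.DataFrame({
    "local_id": ["0042", "0043"],
    "student_name": ["Rivera, Ana", "Chen, Li"],
    "enroll_date": ["09/01/2023", "09/05/2023"],
})

# An assessment vendor extract: integer IDs, split name fields, ISO dates.
assessments = pd.DataFrame({
    "StudentID": [42, 43],
    "LastName": ["Rivera", "Chen"],
    "ScaleScore": [512, 498],
    "TestDate": ["2024-04-15", "2024-04-15"],
})

# Conform both sides to a shared key and consistent date types, then merge.
sis["student_key"] = sis["local_id"].str.lstrip("0")
sis["enroll_date"] = pd.to_datetime(sis["enroll_date"], format="%m/%d/%Y")
assessments["student_key"] = assessments["StudentID"].astype(str)
assessments["TestDate"] = pd.to_datetime(assessments["TestDate"])

merged = sis.merge(assessments, on="student_key", how="inner")
print(merged[["student_key", "student_name", "ScaleScore"]])
```

Multiply this by every source system, district, and project, and it becomes clear why we started looking upstream.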

During this phase, it became clear to us that until we (as a field) figured out how to free education data from the one-off, siloed, proprietary data systems that defined this fragmented ecosystem, decision makers would never get the cost-effective, timely, and rigorous information they need to actually improve the quality of services provided to students. Out of necessity, we built capacity in data engineering so we could help build data systems capable of supporting the rigorous research our partners were asking for.

We then joined a burgeoning community of technologists working to standardize education data through interoperability, becoming active participants in standards bodies like the Ed-Fi Alliance, the Common Education Data Standards (CEDS), and others. These communities have been crucial in building capacity, for us and for the field, to develop the technology and governance structures needed to make interoperability a reality in K-12 education.

Importantly, most of the work happening in these communities focused on optimizing school systems’ use of operational data, meaning the day-to-day data that schools were processing and using as part of their workflow. At the time, the data being standardized for interoperable systems were not being structured, formatted, or used for analytics or research. Although a major (perhaps primary) use case of interoperable data is operational use and mandated reporting to states and the federal government, the immense potential of interoperable data for analytics and research is only just beginning to take shape.

To fully realize this potential, we must bring the fields of research and technology together. We know there is immense expertise and knowledge out there from researchers and technologists alike that will help us accomplish this union, and we want to share what we’ve experienced to contribute to that knowledge base. 

What Have We Learned So Far?

Building a research-grade analytics firm that embraces software development practices has been difficult. Very difficult. Why is this? A primary reason is that there are fundamental cultural differences between these two domains. Below, we summarize how these differences manifest across several dimensions.

We see these differences surface all the time in our collaborative work. They create productive tensions that force us to consider the first principles of what we are trying to accomplish, evaluate tradeoffs, and make decisions informed by diverse stakeholders. But of course, they also cause conflict.  

This could look like different values, with a researcher prioritizing testing a statistical model several different ways to assess its robustness and sensitivity, while a technologist prioritizes having the final model in hand early enough to build efficient data flows. Or it could look like a researcher documenting their methodology in a long-form white paper or manuscript that undergoes a lengthy peer review process, whereas a technologist’s documentation might look like a feature proposal with bullet points and diagrams.

These differences do indeed cause tensions and conflict that need to be worked through and resolved. But there are also similarities across the two domains that we can (and should) leverage to better integrate these fields to work towards a common set of goals: 

Impact 

Both fields value impact. Education researchers and education technologists want the work they do to make a positive contribution to society, to help improve student outcomes, and to make a difference in the lives of students.  

 

Innovation

Both fields value innovation and novelty, especially in terms of finding new solutions to existing problems. Researchers explore new theories, derive and apply new methodologies, and use multiple perspectives to understand educational phenomena. Technologists drive innovation through the development of new tools, new applications, and new systems that improve educational practice.

 

Technical Expertise

Both fields value and reward technical expertise. Quantitative researchers require deep expertise in relevant theory, data collection, statistical analysis, and causal inference. Technologists require deep expertise in software development, systems integration, technological evolution, and user experience design.  

 

Quality

Both researchers and technologists define quality in terms of whether something works, or can be proven to work, in its initial context, and whether it is replicable and generalizable. Researchers use terms like robustness checks, validity checks, replication, statistical controls, sensitivity analyses, power analyses, and peer review. Technologists use terms like prototyping, A/B testing, unit testing, integration testing, quality assurance (QA) testing, and audits. All of these are methods for testing and ensuring quality.

 

Peer Acceptance

Though it can take different forms, both fields have checks and balances embedded in their practice that involve fellow experts. Researchers rely on peer review at multiple stages of the scientific process, including evaluation of the merit of research proposals, ethical review of proposed studies via Institutional Review Boards (IRBs) or ethics committees, submissions to professional conferences, and review of manuscripts submitted to journals. Technologists rely on peer review during multiple phases of product development, including requirements gathering, review of design elements like wireframes and mockups, regular code review, and beta testing.

 

Data-Driven

Both researchers and technologists are empirically driven, and therefore rely on data. Both fields want the volume and quality of data to be ever increasing. Importantly, technologists often prioritize consistency and accuracy of the data, whereas researchers often prioritize validity of the data. This can lead to debates and disagreements about what it means for data to be “right.” 

 

Individualism 

Both fields value the “individual expert,” and are structured in ways that reinforce that value. This can lead to larger-than-life egos, interpersonal clashes, and a sometimes competitive environment.  

 

Knowledge of the Field

In both research and technology, the field coalesces around a particular set of theories and the open questions currently considered most pressing. At any given point, “camps” emerge that subscribe to one major theory or another. Being unaware of these camps, whether present-day or historical, can undermine your credibility within that culture.

The Way Forward

If we want to not just “bridge” but truly integrate these fields, we must start viewing researchers and technologists as two equally critical ingredients needed to build a high-quality ed tech product. Our organizational philosophy has always been to forgo the clarity that comes with a one-size-fits-all policy in favor of flexibility by team, given that our teams come from different fields, with different norms and different environments needed to support their best work.

But we’ve also seen how this can lead to siloing not only between these two fields, but even fragmentation within them, as potentially arbitrary “micro cultures” crop up among those who share surface-level features, like a team name or a title. Our structural and operational design needs to constantly balance, and rebalance, flexibility within the teams and cohesion across them.

This can look like: 

  • Zooming out when examining a problem to assess how it impacts not just one “side” but all of our technical experts, no matter their field 
  • Focusing on philosophies to guide thoughtful decision making rather than policies that dictate thoughtless decisions  
  • Designing teams and structures that surface conflict and tension in ways that can lead to productive conversations about tradeoffs, rather than trying to hide conflict, problem solve it away, or leave it up to an individual to decide which side of the conflict “wins” 

We haven’t gotten it all right yet, but we deeply believe it is possible, and in fact essential, that we continue to push ourselves and the education field to bring the best of research and technology cultures together if we want to finally tackle the hardest problems facing public education.

Interested in working together on interoperability-enabled research?

We love to find great collaborators who want to learn and innovate together. If you are interested in exploring how interoperability and research can change the field of education, get in touch.