For better or worse, NCLB has forced public schools to become data-driven. School leaders think hard every time they make a purchase, sanction a new course, or approve a field trip request. And it’s not just because their purchasing budgets have shrunk by 50% in each of the last three years.
School boards, accreditation organizations, and, perhaps most importantly, local taxpayers want to know how the decisions school leaders make will affect the bottom line of the schools they oversee. Only, in the education game, the bottom line is not profit but student outcomes.
Of late, “student outcomes” has come to mean scores on standardized tests. Forgive me. I can’t mention standardized testing without at least gesturing at a diatribe about the damage we are doing to our citizenry with an inordinate focus on bubble tests. I am not alone in this perspective. I have even asked the technologist community for help on this front in a prior post. That’s as far as I will go with my diatribe in this post, though. On to how we should measure the value added by edtech.
Here is a short list of student outcomes that can be measured and that I find worthy of our efforts to quantify.
- Argumentation, both written and verbalized.
- Mathematical problem-solving skill.
- Scientific conceptual understanding.
- Executive function.
- Creative execution.
- Social networking capacity. (No, I don’t mean counting friends on FB.)
- Moral reasoning capacity.
There are certainly others I have left off the list, but these suffice for my discussion. No one would dispute that a student who develops all of these capabilities is likely to be a contributor to society.
These seven areas of skill are not easily measured. Argumentation cannot be measured on a bubble test. Not even the most sophisticated automated writing evaluation (A.W.E.) software in the world can score a piece of writing for the coherence of its argumentation. Social networking ability can only be measured by someone involved in the subject’s community in an ongoing and significant way.
It is impossible to quantify, with a simple bubble test, the skills most needed for success in the adult world. Yet improvement on bubble tests is what we are all looking for when evaluating new edtech products.
How do we measure the value added to education when a student spends four hours creating a Prezi for their young entrepreneurs class, outlining the business plan for the cottage-industry-style product that they will, in fact, produce, market, and sell over the course of the school year?
How do we measure the organizational skill a student develops by using a tablet with electronic binder software that encourages them to establish hierarchical folder structures in their classes to better retrieve information when they need it?
How do we measure the networking skills gained by a student, trained in school on digital citizenship, who applies those lessons when using a safe online social networking site to connect with classmates and teachers in a virtual collaborative environment that extends beyond the school day?
There are certainly ways to measure these things, and it would behoove schools to do so. Unfortunately, none of the gains will show up on a school accountability report card.
Some edtech tools will directly impact student scores on bubble tests; adaptive math programs are one example. At the K-6 level in particular, where the mathematical procedures being taught are relatively simple, bubble tests offer at least a crude gauge of the value added.
There are myriad edtech tools whose instructional value is not easily quantified. We should not dismiss a difficult-to-evaluate tool, for the same reason that we should not eliminate after-school sports, band, and art programs: valuable investments sometimes take years to mature.
I do not wish to excuse all startup founders from attempting to quantify the educational value of their products. Rather, I challenge founders, school leaders, and teachers to find meaningful ways to demonstrate the value of their edtech interventions.
With the coming ubiquity of smart electronic devices will come myriad edtech applications that allow educators and students greater flexibility, creativity, and functionality in their work. If we wish to take full advantage of this abundance, we will need more meaningful and complex evaluation structures to examine the effectiveness of individual applications.
Now it is time for me to tag, tweet, +1, like, re-post, and otherwise share this article with my virtual PLN so that I can receive feedback from my peers around the world. I wonder how I would quantify the value of that?