Blog addendum for August 18, 2004

I received an e-mail today from Steve Lang, whose background is in educational assessment and psychometrics, discussing the challenge of balancing summative and formative evaluation, as well as the implementation process:

In the pictures of a dichotomy between e-folios used for "summative assessment" as opposed to "growth/reflection", I wondered three things:

First, can the e-folio process be more circular in description and assessment? In other words, if there were a self-selected, reflective portfolio without specific structure that happened to show the necessary criteria for "mastery" by some rubric, etc., that's great... but then the assessment job is to select out those WITHOUT "valid" evidence and either have them do it again or substitute a summative assessment at the close of instruction. Likewise, a structured showcase portfolio used for "assessment" might be cycled back for more reflection, creative elaboration, and personal focus - even after assessment. How can we adapt the processes to serve both needs? In other words, do the two "philosophies" differ in how they cycle, revise, and redo the portfolio? I suspect your involved work with your students to create the examples you gave us involves much back-and-forth review, and we haven't provided much direction for that cycle (number of times, more convergence over time, etc.) for the folios used in an assessment process. Likewise, we haven't provided intermediate rubrics for scoring the interaction or process. There are lots of issues in the cycle step.

Second, maybe there can be some more creative ways to validate the "growth/reflection" portfolios with more job-related impact artifacts (focus group results, samples of impact on others outside of the portfolio, etc.). By being less egocentric and more impact-oriented, the "reflection" is more scorable. By the same token, can we avoid the interrater reliability bugbear by focusing on other estimates of "lack of error" and "confidence", such as separation reliability? This gets into some psychometrics, but I think it can be demonstrated that portfolios (and performance assessment in general) need to move away from some traditional methods (correlations). For example, the main focus of portfolios might be growth - purposefully showing poorer and stronger artifacts, showing examples of things that one self-assesses as NEW skills for the creator and learner (in the case of teachers), etc. Another focus might be high-level or creative thinking. Could we alter the "directions" for portfolio creation to avoid any type of conclusion and steer the creator to show continuous progress without an end? To use modern measurement methods, the artifacts need to be calibrated on a ruler of some kind, from less to more of a conceptualized trait. Two different portfolios may indicate different "slices" of change on a construct, where the individuals have little content in common within the artifacts submitted.

Third, if we choose an "assessment" portfolio and use it for that purpose, can we then revisit the portfolio and use parts of it in another step to achieve the "growth/reflective" process that is attractive to many faculty? How does our conception of e-folios preclude a dual process in the same activity? Can students and faculty conceptualize the differences enough to combine the different philosophies in one activity?
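
A note on the separation reliability mentioned above, for readers less familiar with the psychometrics: in modern measurement models of the kind Steve alludes to (Rasch models, for example), where artifacts and people are calibrated on a common "ruler", separation reliability is typically estimated as the true variance of the estimated measures divided by their observed variance, where true variance is the observed variance minus the mean of the squared standard errors of the measures. In other words, it asks how well the measures spread portfolios along the trait relative to the error in each estimate, rather than how well two raters' scores correlate.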

My response to him:

It seems that if we can reconcile this dichotomy between assessment OF learning and assessment FOR learning, we can make a great contribution to the field. I attended a symposium this summer on Vancouver Island where these issues were discussed at length, and plans were put in place for implementing assessment FOR learning in K-12 schools from Hawaii to Maine and all parts of Canada. Anne Davies led the retreat, and I was privileged to be one of the resource people.

It seems to me that we need to explore strategies that support students' deep reflection on their learning while still giving institutions the data they need to safeguard the public. I represented that concept in this image:

What I tried to represent here is the difference between the floor, the minimum competencies that we expect all teacher candidates to attain (did you call that the safety net?), and what candidates can demonstrate beyond it. You maintain these records in your database as you described in your workshop at AACTE. That assessment management system is the defensible minimum for licensure. But we want students to demonstrate more than the minimum, and that is what gives the portfolio its critical role: providing the teacher candidate an opportunity both to reflect on their growth as an educator and to showcase their best work for a potential employer. In that way, the portfolio becomes high stakes for the teacher candidate, since it helps them get the job they want. The digital stories are really opportunities for teachers to expand on their reflections and learn from their own experience. Why do a portfolio electronically if we don't use the power of the technology to really tell the story of their learning? That story is much more powerfully told in the learner's own voice, with visuals to enhance and illuminate their spoken words.

I was at a conference last month for the Council of Independent Colleges, and one of the participants raised the issue of evaluating text on screen, which is what most of these commercial systems become... digitized versions of the printed page. There were two issues for her: the eye fatigue of reading so much text on a computer screen to evaluate these portfolios, and the boredom for her as a reader... she said she felt like falling asleep while reading these portfolios. Right now, the state of the art is primarily putting text on the screen. If faculty are bored, what about the students? I imagine that was what impressed your deans so much... the examples I showed were much richer than what shows up in 3-ring notebooks, and richer than what shows up in most commercial systems, which are designed to match artifacts with standards.

You raise great questions about the process, which I think is where these concepts get tested. You have identified specific issues that need to be addressed by faculty as they go through the process of giving feedback to teacher candidates on their work throughout their teacher education program. I am more and more convinced that for faculty to truly support the portfolio development process, they need to model it. Faculty need their own reflective electronic portfolios, so that they understand where their students need support and feedback. In my opinion, we are trying to implement a strategy that many faculty members (and deans?) have not experienced outside of the tenure and promotion process, a high-stakes environment which in my experience does not follow the portfolio development process. That is why we are struggling. But out of crisis there is opportunity (isn't that what the Chinese philosophers say?).

That is why I conceptualized a single digital archive of the learner's work, whose digital artifacts can be used for two purposes: high-stakes assessment for licensure, and reflective portfolios that tell the learner's story of growth and change over time. We need both. The challenge is, as you point out, in the process.


© 2004, Helen Barrett. Updated August 17, 2004.