Iannelli: Feedback system needs clarity

Does Temple’s student feedback database provide students with enough info?

Jerry Iannelli

For Rebecca Alpert, free minutes are few and far between.

As if life as a professor of religion at Temple wasn’t enough, Alpert is a certified Reconstructionist rabbi, a regular lecturer at Ivy League schools across the Tri-state area, a renowned expert on sexuality in Jewish history and a habitually published author of scholarly articles. She’s written four books on subjects ranging from Jewish lesbianism to African-American baseball players. The latter, “Out of Left Field,” was published by none other than Oxford University Press. She has her own Wikipedia page.

When Alpert and I spoke, she scrambled to leave by 5 p.m. in order to host a lecture in Baltimore that night.

Despite her considerable workload, Alpert said she still takes time to personally grade every assignment the 100 or so students in her Religion and Human Sexuality in the East and West lecture place on her desk, semester in and semester out. This, of course, is something the nondescript ratings provided by Temple’s new student feedback database cannot properly convey to anyone.

Which raises an interesting question: Is there actually a way to quantify “good” teaching?

Temple, like many other schools, collects feedback from students at the end of each semester, asking them to rate how well each of their professors handled grading, course materials, assignment feedback and sensitivity to diversity, among other topics. In 2011, the university began switching from paper feedback forms, filled out in class under the forceful hand of professors university-wide, to identical online questionnaires completed at students’ leisure outside of the classroom.

Peter Jones, the senior vice provost for undergraduate studies, alerted students via email on Nov. 21 that forms for the Fall 2013 semester were live and online.

For the first time in university history, those feedback forms have been compiled into a database that all students – provided they’ve filled out their allotted share of teacher reviews – can view online to help gauge which classes to take from this semester onward. Professors are ranked based on the individual courses they’ve taught, semester by semester, in four aspects: feedback, grading, teaching and learning.

Instructors receive ratings ranging from one to three in each of those fields, displayed in a vertical stack of color-coded squares like some sort of primitive audio spectrum analyzer. While professors privately receive open-ended written responses from students, that information is not provided in the public database. The forms and database are managed by the Student Feedback Forms Committee, a 14-person group chaired by Jones.

On the surface, it’s certainly easy to gloss over the database as a positive step toward transparency and improved student decision-making, but a lingering question remains: Are students actually equipped to evaluate their own learning?

“Sometimes you don’t know until you get some feedback [long] afterwards,” Deborah Stull, an assistant professor at Temple’s biology department, said during a joint conversation with Alpert and Faculty Herald Editor Steve Newman. “I teach a writing course where I get the most fabulous emails that say, ‘I really hated that course and everything about it because I hate to write. But boy am I glad I took it, because it was actually really useful, which I didn’t realize at the time until eight months out and I had to do X, Y or Z.’”

“The feedback forms assess right as you’re finishing the class,” Newman, also an associate professor of English at Temple, added. “This doesn’t mean that we’re going to throw up our hands and say there’s no way of assessing teaching, but it does suggest that some of the value of what you do as a teacher can’t be known immediately.”

This makes sense on a basic logical level. An “unfair” grader at the end of the Spring 2012 semester may, in fact, have just been mirroring the way a thesis defense panel would act when presented with the same material. But the student may very well not realize this until he or she is three years into a master’s program.

Third-party feedback sites like the widely used RateMyProfessors.com work to bridge this gap with lax time constraints and open, anonymous commenting. But a study published in the peer-reviewed online journal “Practical Assessment, Research and Evaluation” in 2007 maintained that there is no guarantee the information provided on the site is relevant, up-to-date or even accurate, especially in comparison to university-operated feedback programs. The study recommended that universities post their feedback data online.

“The question is, how reliable and valid is [Temple’s] information? What’s being lost?” Newman asked rhetorically. “Well, the answer we often get is, ‘It’s better than RateMyProfessors.com.’ Well, that’s true, I suppose, but it’s also a low bar.”

The site’s “easiness” and “hotness” ratings, though useful in their own ways, don’t exactly scream “academic integrity.”

Newman said many faculty members are “highly skeptical” of the feedback form process due to fears that students reward “easy graders” rather than those who impart the most learning. He also said the Student Feedback Forms Committee is well aware of these potential issues, and that adding some form of qualitative component to the database would be incredibly resource-heavy and potentially violate student privacy rights.

In other words, this was the best the university could do from the get-go.

Students relying on the new database to schedule classes for the Spring 2014 semester are merely left with a smattering of tiny cubes that offer no insight as to why or how a professor was given a two-bar teaching rating as opposed to three.

To become a truly useful student tool, the database needs a more detailed breakdown of professorial criteria and a wider range of possible scores to make up for its lack of personal comments or qualitative data.

It’s certainly a more useful tool than none at all, but graders who scored “threes” on the scale may have done so for wildly different reasons.

“I don’t like my [teaching assistant] grading,” Alpert added during our conversation, explaining why she still takes the time to grade weekly assignments in a massive lecture on her own.

“I want to know what they learned!” she exclaimed.

Which one of those cubes defines “passion”?

Jerry Iannelli can be reached at jerryi@temple.edu or on Twitter @JerryIannelli.

A note to our readers –

In light of changes to Temple’s student feedback system, The Temple News and the Faculty Herald have come together to have a conversation about feedback forms and teaching quality at the university.

For more information, visit temple-news.com/opinion and the Faculty Herald’s website, temple.edu/herald.
