Some of the News Fit to Print
ABOUT HIGHER ED
SCORECARDS GET AN A
California’s community college system on Tuesday unveiled Web-based “scorecards” on student performance at its 112 colleges. The new data tool is user-friendly and often sobering, with graduation, retention and transfer rates for each of the colleges and for the overall system, which enrolls 2.4 million students. The scorecards include breakdowns by race, ethnicity, gender and age. They also feature more than just simple graduation rates, with measures like the percentage of students who complete 30 credits and data on how students who placed into remedial coursework fared in comparison to students who were prepared to do college work when they enrolled. Experts on higher education data and proponents of the college completion agenda praised the new scorecards, saying they are both meaty and easy to understand. The article is in Inside Higher Ed.
COLLEGES BEGIN TO REWARD PROFESSORS FOR DOING WORK THAT ACTUALLY MATTERS TO THEM
David W. Szymanski wanted to work at a college where he could do what he does best: teach students how science can be used to solve real-world problems, help policy makers understand the link between science and the policies they create, and produce scholarship about teaching and learning. But he worried that the kind of work he does—much of it interdisciplinary and public-oriented—wouldn't amount to much in the faculty-reward systems in place on many campuses. What often counts most in decisions about promotions, pay, and performance evaluations is having lots of highly cited research published in well-known, peer-reviewed journals, and being able to win large amounts of grant money. But a growing number of institutions are adopting more-inclusive reward systems for faculty, with increased recognition for nontraditional kinds of research, service in local communities, and innovative teaching. The new systems are also creating avenues for professors to pursue work that matters to them without fear that doing so will derail otherwise promising careers. Mary T. Huber, a consulting scholar at the Carnegie Foundation for the Advancement of Teaching, says she's confident that new forms of scholarship will get the standing they deserve. "There is a growing cadre of people who have done scholarship in these new areas, and they will be the ones to educate others about it and serve as peer reviewers," Ms. Huber says. "If the work that's being done is actually meritorious, expands the imagination, expands knowledge, and improves practice, I believe it's going to win out in the end." The article is in The Chronicle of Higher Education (subscription required).
PEDAGOGY OF THE DEPRESSED
Joseph A. Palermo blogs in the Huffington Post: Whenever David Brooks and Thomas Friedman begin singing from the same hymnal you can bet the next public policy catastrophe is knocking at the door. This time around they've become boosters for online college courses as a panacea for the ills afflicting public colleges and universities. Brooks and Friedman's new interest in higher education means that Very Serious People are lining up to hand over yet another public good to the shock doctrine of privatization. When considering the condition of the nation's public colleges and universities these days, the "shock" has already occurred in the form of defunding and manufactured budget "crises." Now the vultures are circling with ready-made "solutions" that also seek to turn a quick profit for private technology companies. But when private tech corporations, no matter how "visionary" they claim to be, pilfer tax dollars earmarked for public higher education, meddle in faculty governance, or make curriculum decisions detrimental to the mission, then the amazing technological achievement that is the Internet, like any technology, can be deployed in a way that hinders rather than helps the wider society.
WHY EVALUATION SYSTEMS CAN’T IDENTIFY INEFFECTIVE TEACHERS
In 2009, the non-profit education reform organization TNTP (formerly The New Teacher Project) published The Widget Effect: Our National Failure to Acknowledge and Act Upon Teacher Effectiveness. This report spurred the redesign of many state and district teacher evaluation systems to more rigorously assess and address teacher effectiveness. Last week, the New York Times printed a preliminary assessment of the impact of these revamped systems. The result? After investing millions of dollars in data systems, training, and testing, the new evaluations identify roughly the same number of poor performers—one to three percent—as the old evaluations. Reformers cite the vagaries of test score cuts and evaluation norms as reasons why more ineffective teachers were not identified, but the overall trend of these new systems requires a deeper explanation. Evaluation systems still treat teachers like widgets—interchangeable in effectiveness—despite the fact that studies have shown individual teachers can have a significant impact on the academic trajectories of their students. Evaluation systems will fail to adequately recognize differences between teachers until we address the underlying issue of teacher hiring risk and turnover in high-need districts. The article is in The Georgetown Public Policy Review.
THE DOE’S TEACHER EVALUATION SYSTEM HAS OBVIOUS FLAWS THAT OUGHT TO BE CORRECTED BEFORE INITIAL IMPLEMENTATION
Howard Wainer writes this commentary in NJ Spotlight: The Department of Education states that the goal of its proposed teacher evaluation system, dubbed “AchieveNJ,” is to move “from a compliance-based, low-impact, and mostly perfunctory [evaluation system] to focus on educators as career professionals who receive meaningful feedback and opportunities for growth.” Who could be against changes that move us in that direction? Two natural questions to ask are: How well does the current system work? How much better is the proposed system? The road to reform is always difficult, but support for reform can be built if, at each step along the way, evidence of the reform's efficacy is gathered. Because the department has largely skipped these steps, the proposed system ignores the well-accepted paradigms of science. The goals of AchieveNJ are laudable, but these proposals should be viewed as tentative first steps. Before anything is implemented, a great deal more thought, resources, and time must be allocated to assessing how well each innovation works.