Dear CUNY-PSC,

In a recently published post, "Brightspace: progress or giant headache?" by Ari Paul, the union acknowledges workers' concerns over CUNY's transition to a new Learning Management System (LMS), from Blackboard to Brightspace. According to CUNY, the transition will bring many benefits, including an intuitive and easy-to-learn platform, the expansion of online learning, and improved student outcomes through "advanced analytics." So far, the union has refuted only one of these claims, that Brightspace will be intuitive and easy to learn, citing faculty concerns over the time and labor required to learn a new system and to transfer data from one platform to the other. The union is raising alarms over the amount of unpaid labor the transition will require, especially for adjuncts, who simply will not be paid; higher education officers (HEOs), who will work unpaid overtime; and full-time faculty, who will receive no additional compensation or release time.

These concerns are valid, worthy, and deserving of serious attention from the union and rank-and-file members. However, the transition to Brightspace brings with it other concerns, including over user data, privacy, and online learning. The union has long voiced concerns over online learning, especially with the rollout of CUNY Online, but has yet to seriously consider how Brightspace plays a role in the expansion of online learning at CUNY and how it signals the direction CUNY might be headed unless swift and serious action is taken.
Across the higher education landscape, data brokering is becoming an ever larger part of university life. Recent attention to machine learning and artificial intelligence, for instance, has been geared toward students' interaction with these tools (e.g., automating assignments) or toward teaching applications (e.g., grading assignments), but less attention has been paid to the use of machine learning at the administrative level, especially in applications that shape student outcomes and generate predictive models.
One recent example of the use of A.I. in the administration of higher education comes from CUNY's John Jay College, which found itself in media headlines after raising its 4-year graduation rate from 54% to 86%. At the college, administrators used predictive modeling to identify "at risk" students in need of intervention, roughly 200 out of every 750 students assigned to academic advisors. Rather quickly, students received a "support score" that helped determine who needed increased intervention, academic advisors moved in, and graduation rates soared. With the success of the program came serious funding, and by the following fall Google had provided an additional $1.1 million to expand the program to six more campuses.
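To make the mechanics concrete: the details of John Jay's model are not public, but systems of this kind typically train a classifier on historical student records and convert its output into a risk score. The sketch below is a minimal, hypothetical illustration in Python, with invented feature names and synthetic data; it is not DataKind's model, only the general pattern such systems follow.

```python
# Hypothetical "support score" pipeline. The features, data, and
# threshold here are invented for illustration; the actual John Jay /
# DataKind model is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic records: credits earned, GPA, and a financial-aid flag.
n = 1000
X = np.column_stack([
    rng.integers(0, 120, n),    # credits_earned (assumed feature)
    rng.uniform(0.0, 4.0, n),   # gpa (assumed feature)
    rng.integers(0, 2, n),      # aid_eligible (assumed feature)
])
y = rng.integers(0, 2, n)       # 1 = graduated in four years (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Support score": predicted probability of NOT graduating on time.
support_score = 1.0 - model.predict_proba(X_test)[:, 1]

# Flag roughly the highest-scoring quarter of the caseload, echoing
# the ~200-of-750 ratio described above.
flagged = support_score >= np.quantile(support_score, 0.73)
print(f"{flagged.sum()} of {len(flagged)} students flagged for intervention")
```

The point of the sketch is that the score is just a thresholded probability. Everything contested in what follows, what data feeds the model, who owns it, and which features are allowed, happens upstream of that one line of arithmetic.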
The increased graduation rate at John Jay should be celebrated, and the students who might not otherwise have crossed the finish line last spring are sure to be thankful. However, nestled in this feel-good academic success story lie several reasons to pause and reflect, outlined below.
First is the question of data privacy. As one academic advisor told the Times, "we don't ever want to tell students they're at risk." Students are not aware that their data, gathered across a variety of intake forms and university data-gathering tools, is used to generate profiles of them and assess their potential for success. This is data that students are effectively forced to hand over through admissions, financial aid, and registration on learning platforms such as Blackboard and Brightspace. It is worth considering how students would feel, or what they might have to say, if they knew that massive amounts of their personal information, such as demographic data, were being gathered to rate their chances of academic success. On platforms like Brightspace, the breadth of data gathering expands to new heights and could include even the most minute data: information on log-ins, time on a given task, total time on the platform, and number of peer interactions.

Second, access to and ownership of data remain open questions. CUNY would like to pretend this is not the case, and that we should all assume best intentions and believe in the benevolence of our leaders when it comes to data gathered on students and faculty. CUNY preempted this criticism in its own request for proposals (RFP) for a new LMS by including the following language: "The contractor's solution must affirm CUNY's ownership of data and faculty member ownership of their own intellectual property." However, CUNY's ownership of data generated by Brightspace does not hold CUNY itself sufficiently accountable. There is no guarantee that CUNY will properly manage student data, or that it won't license it out or give access to third parties, as was the case with the development of John Jay's predictive model, which was built outside of CUNY by the nonprofit data science organization DataKind.

Third is the question of appropriate use. John Jay's model used 75 risk indicators to help predict a student's chances of graduating, but what happens when the use of predictive models is broadened to answer questions like, "who is likely to succeed in STEM?" A 2021 report from the nonprofit newsroom The Markup details how hundreds of universities across the country use race and ethnicity as risk factors in predicting a variety of academic outcomes. That data is then used to steer Black, Latinx, and Indigenous students away from certain career paths and majors, especially in STEM, where these students have already been historically underrepresented. At CUNY, predictive modeling could have serious implications for student trajectories, not only in the career paths and majors students are encouraged to explore, but also in the colleges they are encouraged to transfer to at the two-year level, or to apply to at the graduate level. Scores would then shape how CUNY personnel come to see and relate to the most marginalized students, using the guise of statistical analysis to cast students of color as less likely to succeed and, ultimately, as less deserving. The sketch that follows illustrates how this steering effect falls out of a model the moment race is allowed in as a feature.
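The mechanism The Markup describes is easy to reproduce. In the hypothetical sketch below (again Python, with invented data and coefficients, not any university's or vendor's actual model), two students with identical academic records receive different risk scores solely because a demographic flag was included as a predictor.

```python
# Hypothetical illustration of race used as a "risk factor". All data
# and relationships here are synthetic; this is not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 2000
gpa = rng.uniform(0.0, 4.0, n)
credits = rng.integers(0, 120, n).astype(float)
flag = rng.integers(0, 2, n)  # 1 = member of a flagged demographic group

# Synthetic labels that encode a historical disparity, so the model
# "learns" the demographic flag as a predictor of non-completion.
logit = 1.2 * (gpa - 2.0) + 0.02 * (credits - 60) - 0.9 * flag
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([gpa, credits, flag]), y
)

# Two students with identical records, differing only in the flag.
student_a = [[3.2, 45.0, 0.0]]
student_b = [[3.2, 45.0, 1.0]]
print("risk, unflagged:", 1 - model.predict_proba(student_a)[0, 1])
print("risk, flagged:  ", 1 - model.predict_proba(student_b)[0, 1])
# The flagged student receives a higher "risk" score despite identical
# grades and credits: the steering effect described above.
```

Nothing on an advisor's dashboard needs to mention race for advising to act on it; once the feature is in the training matrix, the disparity is laundered into a single, authoritative-looking number.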
Fourth, learning conditions are working conditions. At this point, it is worth considering how the precarious situation students face, described above, relates to CUNY faculty and the CUNY-PSC. The proposition here is that faculty should stand in solidarity with students and understand that what is done to students one day is sure to be done to faculty the next. For instance, what stops CUNY from gathering data on adjuncts, professors, and administrative support staff to generate predictive scores around work and productivity? What if predictive modeling is used to assign courses, distribute career advancement opportunities, or determine hiring? Perhaps worst of all, what if predictive modeling is used to determine which programs, at which campuses, are worth funding? And what if all of these data-driven activities are already, in some form, underway, without the express knowledge of CUNY staff?

What should be clear by now is the need for regulations and protections, and for some means by which large public institutions can be held accountable. What CUNY does with the data it gathers, in the name of those it gathers data on, calls for a far greater level of transparency and accountability. The CUNY-PSC can be a leader in this fight, one that is sure to become only more relevant in the coming years, by engaging more critically with big data and its uses in the university system, and by codifying protections, whatever they may be, as early as possible to ensure the best working conditions for its members.