A little cottage industry seems to arise at the conclusion of each decade, joyously pointing out the long-since forgotten techy items from the past 10 years that were supposed to change the world but turned out to be miserable failures instead.
While we are only at the midpoint of the 2020s, it is safe to say AI will not be the next Google Glass, 3D television or the loads of other mainstays on the 2010s lists of IT infamy.
Higher education quickly recognized both the potential positives and negatives of AI as applied to the teaching, learning and academic research space (think plagiarism on one hand matched against the prospect of personalized learning on the other). Underscoring this fact is the groundbreaking recent announcement that the California State University System intends to become the nation’s “first and largest AI-empowered university system” (https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx).
However, AI adoption for administrative tasks – providing desperately needed help as struggling institutions look to lower costs, attract/retain more students, and obtain external support via fundraising, grants, etc. – has been a little more deliberate.
But this is changing fast, as it seems every higher education information system vendor is now flexing its AI muscles – or at least the sales and marketing teams are doing so. Phrases like ‘Throw your CRMs into the trash bin because mine innovates using AI’ or ‘I’ll see your legacy registration system and raise you a machine learning course schedule wizard’ are lurking in that sea of PR if you read between the lines hard enough.
The fear of missing the AI train must be balanced against higher education cybersecurity and data privacy risks, because AI requires data and that is where things get complicated.
Higher education is always among the most vulnerable industries because its data is so valuable to cyber attackers, and it is considered an easy target. No other industry has its combination of user churn, inexperienced and casual users, a plethora of personal devices, and an overriding culture of openness. Couple that with IT budgets and staffing that often face unprecedented challenges, and the mix attracts bad actors from across the globe. Increasing AI usage will likely bring even more frequent and more sophisticated attacks.
Adding to the complexity is the presence of shadow systems housing sensitive or confidential data, which have lurked in higher education for some 40 years. Relevant examples include a power user downloading student fiscal data onto a personal hard drive, a researcher storing sensitive data locally, and an office deploying an information system the IT department does not even know exists.
Consider the dark possibilities if a user innocently exposes such data to a GenAI model.
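One modest safeguard against that kind of accidental exposure is screening outbound prompts before they ever reach an external model. The sketch below is a hypothetical illustration only; the regex patterns, the screen_prompt function and the identifier formats are assumptions made for demonstration, not a reference to any specific vendor tool or institutional standard.

import re

# Hypothetical patterns for data that should never reach an external GenAI model.
# A real deployment would tune these to the institution's own identifier formats.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\b[A-Z]\d{8}\b"),  # e.g., a campus ID such as "S12345678"
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if any sensitive pattern matches."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (not findings, findings)

allowed, findings = screen_prompt("Summarize aid history for student S12345678, SSN 123-45-6789.")
if not allowed:
    print("Blocked before reaching the GenAI service:", ", ".join(findings))

In practice such a filter would sit in a gateway in front of whatever GenAI services the institution sanctions and would deny by default, rather than rely on individual user judgment.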
This all means answers to traditional questions like ‘Where is the data actually stored and what security measures exist for that data both at rest and in transit?’ and ‘How robust are the tools restricting data access?’ deserve more scrutiny than ever.
Perhaps more important is the question: ‘Does my executive, who listened to AI hype at a conference last week and is now eager to buy an AI-infused product, fully grasp the potential risk?’ At one time, it may have taken a concerning cybersecurity audit finding to catch the attention of the institution’s board or cabinet. But these are no longer those times, and executive recognition of AI risk up front is critical.
Executive leadership should prioritize the creation of practical, common-sense policies governing AI usage. Tactical and operational leadership needs to be empowered to keep those policies up to date and to make key decisions on tools and techniques to help keep data safe. They can then build appropriate procedures, guidelines, standards, FAQs, and best practices so users can work effectively in an emerging AI world.
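For instance, one lightweight way to make such policies operational rather than shelfware is a machine-readable registry of approved AI tools and the highest data classification each is cleared to handle. The tool names, classification tiers and is_permitted helper below are purely hypothetical illustrations, not a recommendation of any particular product or framework.

# Hypothetical registry: approved AI tools and the highest data
# classification each is cleared to handle (values are illustrative only).
APPROVED_AI_TOOLS = {
    "campus-chatbot": "public",
    "vendor-crm-assistant": "internal",
    "secure-research-llm": "confidential",
}

# Classification tiers ordered from least to most sensitive.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

def is_permitted(tool: str, data_classification: str) -> bool:
    """Allow use only if the tool is registered and cleared for data at this level."""
    cleared = APPROVED_AI_TOOLS.get(tool)
    if cleared is None:
        return False  # unregistered (shadow) tool: deny by default
    return CLASSIFICATION_ORDER.index(data_classification) <= CLASSIFICATION_ORDER.index(cleared)

print(is_permitted("campus-chatbot", "confidential"))   # False: tool cleared for public data only
print(is_permitted("secure-research-llm", "internal"))  # True

Whether such a registry lives in code, a database or a governance spreadsheet matters less than the pattern: tactical teams keep it current, and anything not on the list is treated as a shadow tool.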
Bill Balint is the owner of Haven Hill Services LLC, contracted as TriVigil’s Advisory CIO for Education.