
Lee Lambert, Artificial Intelligence, and PASI

Image by Pixabay

This spring, we wrapped up Jim Catanzaro’s blog series on artificial intelligence (AI) and higher education. During the ACUE webinar series, Jim gave shout-outs to several community college leaders, some of whom are also HERDI advisory board members! Today, we want to share the work of Lee Lambert. Lee, a HERDI advisory board veteran of many years, is Chancellor of the Foothill-De Anza Community College District in California. He has been following the conversation on AI and higher education and drafting his own approach to aligning this new category of technology with higher ed’s mission. Lee started by putting together a draft framework that grew into a blueprint, which in turn became a presentation!

An AI presentation at PASI  

In July 2024, Lee presented a Cengage-sponsored session at AACC’s Presidents Academy Summer Institute (PASI). The session focused on the AI blueprint he developed with Reetika Dhawan, Chief Executive Officer of Entrepreneurial College and Vice President of Workforce & Healthcare at Arizona Western College. The blueprint was not developed specifically for Lee’s district; that said, he told us that parts of it were used to guide his thinking about AI at Foothill-De Anza. Lee’s vision and objectives for this work are as follows:

“Align AI initiatives directly with the institution’s mission, such as enhancing student success, equity, educational excellence, and fostering a diverse and inclusive learning environment.”  

AI in CA   

During a recent phone call, Lee shared the inspiration for his focus on AI at PASI. The California Community Colleges have been hosting statewide meetings on AI and higher education. In the spring of 2024, the Chancellor’s Office supported a statewide survey developed and administered by the California Community Colleges Workforce and Economic Development Regional Consortia. Those findings and others are included in the July 2024 report to the Board of Governors called Generative AI and the Future of Teaching and Learning. The report also includes a section on the new Digital Center for Innovation, Transformation and Equity, a statewide center located in Lee’s Foothill-De Anza district.

AI in the Foothill-De Anza district  

With growing momentum around AI and higher education, Lee rolled out the first major phase of the Foothill-De Anza AI program. In this phase, Lee and his team established a districtwide work group, a districtwide professional development AI series led by faculty, and a collaboration with N2N’s Lightleap AI to beta-test a fraud detection tool.

Questions leaders should be asking about AI at their institution  

As other higher ed leaders consider their strategic approach to AI, Lee recommends that they think about AI in the context of three major categories: human-centered issues, infrastructure, and physical facilities. In each area, there are questions leaders should be asking.

Human-centered issues – Do the principles used to develop AI policy center on the needs of students, employees, and the community? Do they include sections on privacy and bias? Do we have existing policies that need review and a refresh based on our new AI reality?

Infrastructure – What is the present state of the institution’s technology infrastructure? What about energy efficiency, cooling, and power? AI will place greater demands on facilities’ energy consumption. Is the institution prepared to optimize its infrastructure to support AI capabilities?

Physical facilities – What is the capacity of existing transformers and switchboxes to meet the demands of these advanced AI technologies at the institution?

Returning to the all-important human-centered issues, there’s more to share on the intersection of AI, ethics, bias, and other key topics of concern to educators. 

AI, ethics, bias, discrimination and plagiarism   

Lee’s final thoughts on AI and higher ed focus on the much-discussed topics of ethics, bias, discrimination, and plagiarism. He shares…

“We are reacting to AI as if it has bias and can create discrimination. The context is amplifying our human frailties. We need to address the human side of it and ensure algorithms are created with diversity in mind and trained on high-quality data. Don’t fault the tech. Plagiarism is the same. AI is not the problem. AI that is not human-centered is the problem.”  

Lee sent us a few sources he’s been reviewing on the topics of cheating and AI. The first is from the International Center for Academic Integrity (ICAI). This 2020 research, by ICAI founder Dr. Donald McCabe, shows how prevalent cheating is in colleges, with 32% of undergraduate respondents saying they had cheated on exams. And this 2024 Atlantic article shows how Arizona State University’s writing program director is working to effectively incorporate the use of AI in English courses while coping with its “temptations”.

Even with these sizeable concerns, Lee feels that it’s better to be ahead of the curve, testing and evaluating these new technologies and investing in the future of the district. The hope is that, in the end, the AI work will align with the mission of Foothill-De Anza and support its goal to enhance student success, equity, and educational excellence, and to foster a diverse and inclusive learning environment.

Do you have stories to share about the work your institution, district or system is doing with AI? Yes? Then please reach out to us at herdi@herdi.org. We’d love to hear what you and your team are working on.
