In 2024, artificial intelligence dominated conversations across the globe, from copyright lawsuits against AI art generators to developing legislation for artificial intelligence regulation. As we enter this new frontier of technological advancement and intelligence, George Mason is positioning itself to be a pioneer in the field.
On this episode of Access to Excellence, President Gregory Washington and George Mason’s inaugural vice president and chief AI officer Amarda Shehu discuss the research possibilities of AI and the role of higher education in AI training and development.
Intro (00:04):
Trailblazers in research; innovators in technology; and those who simply have a good story: all make up the fabric that is George Mason University. Taking on the grand challenges that face our students, graduates, and higher education is our mission and our passion. Hosted by Mason President Gregory Washington, this is the Access to Excellence podcast.
President Gregory Washington (00:26):
In 2024, artificial intelligence dominated conversations across the globe, from questions about the environmental impacts of ChatGPT to copyright lawsuits against AI art generators and developing legislation for artificial intelligence regulation. As we enter this new frontier of technological advancement and intelligence, George Mason is positioning itself to be a pioneer in the field. Our guest today is an exemplar of that. Professor Amarda Shehu is George Mason's inaugural vice president and chief AI officer. She is also a professor in the Department of Computer Science in the College of Engineering and Computing, where she also serves as an associate dean for AI innovation. She is recognized in various scientific communities for her research and thought leadership in artificial intelligence, and she is a member of multiple task forces advancing AI in security, AI standards, and AI governance and policy. Amarda, welcome to the show.
Amarda Shehu (01:39):
Happy to be here.
President Gregory Washington (01:42):
Well, this topic is one that has essentially dominated scientific and engineering fields and the total public at large over the last year and a half or so since the emergence, the public emergence of ChatGPT.
Amarda Shehu (01:59):
Right.
President Gregory Washington (01:59):
And so George Mason is addressing some of the world's most pressing issues, and your new focus as our chief AI officer will be integral in many ways to dealing with the challenges and the opportunities associated with artificial intelligence. But before we get to that, how do you see AI efforts contributing to major solutions to the grand challenges that we're facing today, like public health, or being able to help us become more climate resilient?
Amarda Shehu (02:36):
Right, so that's a good question. What happened to the old days, right? When you could work in a lab and nobody knew what you were working on? <laugh>. So I'm a very levelheaded AI researcher, as my students will tell you, but even I cannot really contain my enthusiasm about the opportunities and where we're going and the breakthroughs that we are making. I have a lot of examples for you, but I'm gonna try to keep it short, so just stop me at any time. Huge opportunities in the health space. Okay? And even if I narrow it, like really focus in on a specific sub-sector: new drugs. We now have AI-discovered drugs that are making their way through clinical trials, right? And there are even studies now that show that these AI-discovered drugs have a higher likelihood of surviving those really difficult, complex steps that take a drug from the idea to basically putting it out on the market.
Amarda Shehu (03:30):
I know a lot about this space because my lab was one of the first, and we continue to develop AI methods for what we call property-controlled generation of small molecules. So molecules that can serve as drug compounds, but where you really have to correct for a lot of things. Like, do they survive in the blood? Do they cross the blood-brain barrier? We have new biologics. Okay. Did you follow the Nobel laureates this year? There was like huge press on them.
President Gregory Washington (03:57):
Yeah, yeah, yeah.
Amarda Shehu (03:58):
Yeah. David Baker, he just gave his Nobel speech, I think yesterday, and he's recognized in the field for protein engineering and protein design. So now we have AI pipelines that are developing new proteins, new enzymes, right? So, say you're a mechanical engineer: think about new catalysis processes. They go beyond drugs to new materials. Imagine the opportunities: materials that can capture carbon dioxide, that you can put in the soil and clean the soil, or clean the oceans.
Amarda Shehu (04:27):
New materials for more resilient structures, better bridges, all the way to quantum materials. And we used to talk for a long time in my community about automated labs. So labs where, think, you have the robots doing all the experiments. But now we realize that that was a very unambitious way of framing what an AI scientist can be. Because now we're talking about labs that are not only operating but ready to be scaled, that go from, you know, ideation to generation to synthesis to testing. And there's just so much more that's coming in the health space. There is a team here of Mason researchers. What they're doing is they're looking at how do we help folks with opioid addiction, right? There are these very classic frameworks for how to do interviews and how to motivate folks to stick to specific regimens, and they're utilizing AI to personalize these motivational interviews.
Amarda Shehu (05:21):
I think you mentioned challenges, like grand challenges. Well, climate is a grand challenge of our time. So what about, for instance, climate forecasting? Climate resilience. Now we have really accurate AI methodologies that can forecast weather. We have a team here at Mason that feeds satellite data into AI algorithms to better predict storm surges, right? So you can help our communities be better prepared. Uh, we have another team that is thinking, okay, how do we communicate to communities better? Right? How do we help with disaster preparedness? They just received, um, a $1 million grant from NIST to advance AI for disaster preparedness. We have another team that's using AI to predict snow accumulation and melt. It's a collaboration between the College of Engineering and Computing and the College of Science. What about environmental conservation? We have a faculty member here in the College of Science. He's using AI-powered systems to monitor ecosystems like the Amazon and track wildlife populations. Amazing stuff. I have a lot more.
President Gregory Washington (06:21):
No, this is really, really good stuff. I think the real challenge for many of us, especially those of us who want to harness it for purposes in higher education, is the speed at which the technology is moving. Right? You know, in our meetings with our faculty and in our individual meetings, we've talked about how essential it is to not just harness the technology, but to build the right ethical guardrails to protect vulnerable populations, actually to protect the population overall, right? From outcomes associated with AI. So what is your approach to making sure that ethics, societal impact, and good governance are front and center in our AI work?
Amarda Shehu (07:12):
Yeah, I fully understand the speed challenge. I tell colleagues who come to me saying, man, there's like a new article every day, how do I keep up with it? And I tell them, well, there's a research paper like every five minutes, it seems to me, in my field. So I understand the challenge of speed, but we're a little bit ahead of the game here, I think, in terms of thinking about those ethical guardrails. And I might say we're a little bit ahead of the game compared not only to other universities in Virginia, but even beyond. So let me just give you a couple of examples. Mason, for instance: we are active participants in the AI Safety Institute Consortium that's run by NIST under the charge from the Department of Commerce. And we're the only university in Virginia there. And as part of, you know, being a participant, we are outlining, better understanding, what are the capabilities of AI systems?
Amarda Shehu (07:59):
Can we forecast them? And as we think about outcomes and capabilities, can we think ahead of what those ethical guardrails are, right? How can you define them? And more importantly, how do you translate them from concepts to actual metrics, right? And functions that you can put into these systems so that they do the right thing, as we understand the right thing, right? In alignment with our values and with our principles. Here at Mason, we also work very closely with SCHEV, the State Council of Higher Education for Virginia. And we are really trying to better piece out the governor's charge, for instance, in what's called Executive Order Number 30, where the governor is worried about AI safety and thinking about what it means to integrate AI methodologies in education, right? What are those guardrails for our instructors, for our students? And they go beyond just data security and data privacy, right?
Amarda Shehu (08:51):
So think agency, think making sure that those tools are not replacing very key developmental skills. So in fact, we are leading in this space. We have an AI in Education summit coming up in May where we are bringing together higher eds, community colleges, K-12s across Virginia to really outline, develop, and implement standards. Let me give you three very quick, high-level frameworks, right? How we are thoughtfully proceeding on those ethical guardrails. First: a governance framework, right? So things may be moving very fast, but you wanna go back to, what are your values? What are your principles here at the university? And so together with a lot of representatives from the colleges, faculty, staff, students, we're developing comprehensive policies that cover data privacy, security, ethical use, transparency, accountability, agency, and more. And we're not thinking in the abstract, all right? These are not sort of boring pieces of text that you write and nobody really reads or understands what they mean.
Amarda Shehu (09:48):
We are thinking, what does this mean for you as an instructor, right? What does this mean for you as a student? What does this mean for you as a staff member, a researcher? So we're thinking of all the stakeholders, um, integrating these guardrails in the curriculum and in the research, right? You need to go beyond just saying the right things to doing the right things. So again, that's what I meant by saying we're actually ahead of the game. We have an undergraduate minor in ethics and AI, because what we wanna do is open this up to all Mason students, right? We want students from humanities, from social sciences, from education, from business, policy, wherever they are; they can come and they can take this minor, and they can gain not only a better understanding of AI, but also a better understanding of what it means to have ethical AI, what is safe and responsible AI.
Amarda Shehu (10:33):
We also have, specifically, a responsible AI certificate for our graduate students, where we are teaching them about risk frameworks, and then how to make sure, wherever they're going, whether to companies, or whether they become clerks or staff or senators, right, wherever they end up, that they incorporate those risk frameworks, whether it is in the way they're critiquing, right, interrogating AI systems, or even actively participating in development. But I think the bigger umbrella is how do you foster this culture, right, of responsible innovation on campus? And I'm gonna come back and circle now to research. And the reason I wanna circle back on research is because, as folks perceive, it's a very fast-moving space, but it's a space where we are just discovering the outcomes, right? So we wanna make sure, okay, what does it mean to have agency? How do you develop systems that don't take agency away from human beings? This is an active area of research. All of these are. So we're incentivizing our faculty to come together across the different colleges and together advance, you know, research that tells us, okay, how do we make sure that we have ethical AI? Or how do we make sure that this is interpretable or transparent, and we can understand what it is that we are doing if we're using these systems in decision making?
President Gregory Washington (11:48):
You've brought up a whole host of ethical questions,
Amarda Shehu (11:51):
Right? Yeah.
President Gregory Washington (11:52):
Right? And, and clearly you are thinking about 'em. Our team is thinking about 'em, right?
Amarda Shehu (11:59):
Right.
President Gregory Washington (11:59):
So talk to me about what that looks like in the classroom, right? How can we make sure our students are equipped to handle the ethical questions and societal challenges that come with AI? What are we doing to ensure that through our ethics of AI curriculum?
Amarda Shehu (12:20):
So first I wanna take a little bit, you know, a few steps back, because we do want our students to think about the ethical aspects of AI, but you can't ask the real questions and you can't do the right things if you don't have a deeper understanding of the technologies, right? So we don't want students just to get their information from articles. We want our students to really understand, what is artificial intelligence, right? What are the methodologies? And here I'm not thinking about, say, computer science students or engineering students. I'm thinking very broadly: any student at a public university. We talked for a long time in computer science, I've been around a few years, we talked so much about opening it up, right? Opening up computing, opening up computing principles and analytical thinking to all other students. But we just couldn't figure out how to do this without forcing students to go through, you know, you gotta take Python 101 and then you gotta take Python 201, and then you gotta take Java, right? So the model was always, well, first come and learn...
President Gregory Washington (13:29):
First you gotta learn how to program.
Amarda Shehu (13:30):
Yeah. You gotta learn how to program, then...
President Gregory Washington (13:32):
You gotta learn the basics of AI. And then, and then you gotta program the AI...
Amarda Shehu (13:36):
Right? Right, right. But we would start with, like, okay, sort these numbers. And you know, it's not for everybody. I always tell folks, I survived my first years as a computer science undergraduate because most of the stuff that I was doing was okay, but I just couldn't see the big picture. Why is it that I'm doing this, right? What is the really interesting thing? So what really excites me now is that we have the opportunity to teach students the bigger things without forcing them through this pipeline, this cookie-cutter model, right? So they don't have to learn the inner workings of coding. Now we're talking about, uh, non-coding frameworks, right? We can teach students how to build AI agents by operating on top of these platforms that are basically point and click and put together. So this is at this level---
President Gregory Washington (14:24):
Now what platforms? Now what platforms are those, right?
Amarda Shehu (14:28):
So there are a lot out there in industry. A lot of the companies are proceeding in this space. Of course, OpenAI is a big player, but even others, you know, um, Anthropic, um, Microsoft, Amazon, they're all going in this space. They're all going into AI.
President Gregory Washington (14:43):
So these are, these are, these are no code or low code type frameworks.
Amarda Shehu (14:48):
Yep.
President Gregory Washington (14:49):
There's code underneath that's being generated by the bot.
Amarda Shehu (14:52):
Absolutely. Yeah.
President Gregory Washington (14:53):
And you're, and you're setting a set of high level instructions, right? This is just for the community out there.
Amarda Shehu (14:57):
Yeah. You're operating at the top.
President Gregory Washington (14:59):
The challenge with that is you don't necessarily know or understand if the code that's generated underneath is doing exactly what you stated to it in English.
Amarda Shehu (15:14):
Yeah. Those are the skills, right? If you teach a certain way, then you can create the spaces to really get into what matters, right? So if you allow students to, I call it do it yourself, right? DIY. I'm really excited about this: DIY AI. So you teach the students, here is how you can build an AI agent. And what could that do? Okay? Let's say you're looking for a job. Alright? So I wanna figure out, among the job descriptions out there, which are the ones best aligned with my CV? You can build an AI agent that says, Amarda's looking for a job. Okay, I'm not looking for a job, but you can build an AI agent for that. Now, the real question is, is it doing what you're doing? Is it giving you the right information? And more importantly, how can you spot it when it's not giving you the right information?
Amarda Shehu (16:05):
Right? Because it may be very subtle. And so that is what I would call AI literacy, right? And that's what we wanna focus on for all our students: to give them the right capabilities so that they can understand these nuances and be not just better-informed users, but better-informed builders. Right? Now I wanna mention one more thing. The opportunities in this space are big, not only because of the level at which you operate, but because when you open these things up to students that are coming, let's say, from a philosophy background, right? Or in communications. Then those students can ask some questions that maybe an engineering student would not, right? Because they're going through their program, whether it's policy, ethics, or English, humanities, whatever it is that they're trying to do, or health, right? Say public health. And they can spot, okay, it's not doing what I want it to do, right? Or, I wanna do something bigger. And they can really come up with new problem spaces, not just new solutions. And then the right questions: how do we interrogate? So it's a win-win, because you are advancing education, but you're even doing a little bit more. You are advancing innovation, I would say.
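[Editor's note: as a concrete illustration of the DIY AI agent Shehu describes, here is a minimal, hypothetical sketch that ranks job postings against a CV by text similarity. The CV text, the postings, and the TF-IDF scoring are illustrative assumptions, not a tool discussed in the episode; a real agent would typically call a language model, and, as Shehu emphasizes, its output would still need to be checked.]

```python
# Hypothetical sketch: rank job descriptions by similarity to a CV.
# TF-IDF cosine similarity stands in for what an AI agent might do;
# a production agent would add an LLM and a verification step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cv = "Professor of computer science: machine learning, molecular design, AI governance."
jobs = {
    "Chief AI Officer": "Lead university-wide AI strategy, governance, and policy.",
    "Data Engineer": "Build ETL pipelines in SQL and Spark for retail analytics.",
    "ML Research Scientist": "Develop generative models for molecule and protein design.",
}

vectorizer = TfidfVectorizer(stop_words="english")
# Fit on the CV plus all postings so everything shares one vocabulary.
matrix = vectorizer.fit_transform([cv] + list(jobs.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Best matches first. The AI-literacy question still applies: why did
# each posting rank where it did, and is that ranking actually right?
for title, score in sorted(zip(jobs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
```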
President Gregory Washington (17:16):
What kinds of support will we offer the region in this regard, right? So you have our students, and I get it. They're gonna learn the basics of how AI works, and then on top of that, they're gonna learn tools to ensure that the AI is doing what it's actually intended to do. That's the literacy piece, yes. And so now you have a cohort of students who are basically equipped to go out and tackle major problems with AI.
Amarda Shehu (17:49):
Yes.
President Gregory Washington (17:50):
That being said, we have a whole community around us, right? And at last check the fastest growing AI community relative to job requisitions is the Washington D.C. Metro area.
Amarda Shehu (18:07):
Yes, it is.
President Gregory Washington (18:08):
It's second only to Silicon Valley.
Amarda Shehu (18:10):
Right, we're number two.
President Gregory Washington (18:11):
And it is razor thin, the delta between Silicon Valley and Washington D.C., and then there's a big drop-off when you get to places like Austin, Texas, and many of these other major cities that are hubs of innovation and technology. So given that, we also have to figure out how we engage the broader community from an AI perspective. So talk a little bit about your thinking on that.
Amarda Shehu (18:40):
Yeah. So we live in a really interesting region in the nation, and I always tell my students that you are so privileged to be, you know, next to the government and next to you know, all sorts of agencies here. But what may not be appreciated, as you said, is the whole industry, right, that supports this region. So we are a public university, and as you know, our first tangible product is a skilled workforce for the region, right? And then the nation and the world. And I often hear you say that the majority of our graduates stay in Virginia. So that's a great thing because that means you're uniquely poised to offer an AI skilled workforce to Virginia. I went through this exercise this past year of designing a new educational program, a new master's program, and I got sort of this very firsthand view of how many AI and AI-related job descriptions are out there in our region, right?
Amarda Shehu (19:35):
There is a lot of industry here that offers public sector technologies, right? They're developing systems for the government, whether it's local, state, or federal government, right? But then there are also startups in this region. There's a lot of very diverse industry. So what we really can offer this region is an AI-skilled workforce. We understand the region, okay? We understand the needs, because we're next to the government. We talk to our federal tech providers, we talk to companies, but we also talk to the DOD, right? We talk to the Department of Health, we talk to a lot of agencies. So we see, you know, what the needs are, and we see also what the capabilities are in the region. And so this is a very unique view that we have, because it informs us on what it is that we need in our educational programs to prepare the students, right?
Amarda Shehu (20:25):
Whether they want to join the government or whether they want to go into industry in this region, there are just tons of opportunities. So we've made a lot of headway in this space. We already have a lot of our graduates going into this region, but we also have a lot of educational programs, either operating, I mentioned a couple of those before, or new ones in the works now. Something really unique, I think, that will open up opportunities, right? And will also open up opportunities for industry and government in ways that they haven't thought of before, is that we can create new educational programs that are not just, you know, for engineering students, that are not just housed in, let's say, a college of engineering or a school of computing, but new educational programs that bring all the colleges together. So everybody, for instance, talks about AI ethicists, but did you know that there are no programs for that? Like, if you think about it, maybe you get there through a PhD. We can do this. We can create new educational programs that prepare our students right after an undergrad to have this understanding and go and, you know, help the government as they think about, say, procurement.
President Gregory Washington (21:35):
Now we are preparing a program in AI ethics. Is that accurate?
Amarda Shehu (21:39):
We do.
President Gregory Washington (21:39):
So where are we in that process?
Amarda Shehu (21:41):
Yes. So we first tested the waters with an undergraduate minor, and now we are talking about going to the higher levels, right? So going to majors, going to master's. But it's not even just AI ethics, right? You open it up. There's a lot more in this space, because then you think about, okay, what does it mean to develop things for society, to have societal impact, right? So there's a lot more thematically under the umbrella of AI and society. There's a lot under, say, AI and health. The College of Public Health is creating concentrations. And so we are first testing the waters, but we are just proceeding now to bachelor's and to master's. And, um, as I say to folks, stay tuned. There are gonna be very new things coming out of Mason that not only will serve the needs of the region, but I dare say we're gonna go a little bit beyond that, because we're going to tell industry, well, you didn't think about this, but look, we have thought about it. And here is, you know, where you should be heading and here is how you should be doing things.
President Gregory Washington (22:43):
It's good that you bring that up. One of our strengths is our interdisciplinary approach to research, bringing our faculty and experts together to tackle complex problems. How do you see us harnessing this collaborative approach while still maintaining the rapid innovation that we've had to get to, uh, AI solutions?
Amarda Shehu (23:06):
Yeah, so we actually are very collaborative at Mason. I think it's part of us being young, really, right? And being ambitious and trying to run very fast and, you know, catch up and even go beyond other universities in a really short amount of time. We really don't have as many silos as you may find in other universities. But we don't just take for granted the fact that our faculty, you know, have that posture of wanting to collaborate with others. We actually are creating the structures, if you will, the infrastructure and the incentives to allow faculty and students to collaborate with one another. I wanna give you a couple of examples. You may already be familiar with them, but just, you know, for whoever is listening to the podcast. We have three transdisciplinary institutes, okay? And those institutes are constantly coming up with ways of bringing faculty and students from the different colleges together in a room and outlining new ideas and new problem spaces, as I call them.
Amarda Shehu (24:05):
So in 2023, we had an AI innovation summit where we had, I think, 150 faculty that came from all the colleges, and they were organized around themes of research and educational opportunities that they wanted to develop, right? So we had AI for society, AI in education, AI and health. We had a lot of sort of special-interest groups, teams of faculty and students, that went out of that symposium with ideas and, you know, shared goals that they could go after, and some of the ideas even for the educational programs.
Amarda Shehu (25:00):
There are great challenges that you perceive about the world in which you are living and are gonna live in. Go and formulate, you know, a problem that you wanna work on. But it has to be interdisciplinary, inherently interdisciplinary. And so the students really take ownership. They're given a three-year fellowship, so it gives them that breathing room to develop complex ideas that bring together faculty across the colleges and give the student, you know, the mentorship and the expertise that they need. We have a public-private partnership faculty fellowship. Okay? It's a mouthful, but it's the P3 Faculty Fellowship. What it does is it tells our faculty: go outside your lab, find an industry partner in the region, or even in the nation for that matter, find a problem that they're struggling with. But it also has--it can't be niche--
Amarda Shehu (25:47):
It also has to be a problem that has high societal impact and high intellectual merit, right? So we're doing a lot of these to incentivize faculty and students to collaborate. So I like to think of them as seeds with strings, right? So we're moving fast, but we wanna make sure that as we're moving fast, we also have accountability. Right? What are we doing? What are the ideas that you're developing and, and what are they bringing? What are the societal impacts? We are incentivizing faculty and students, but we're asking them to think big and to do big things.
President Gregory Washington (26:18):
So there are a host of programs and a host of initiatives that connect us to the broader population. Industry and the like. Alright. Talk a little bit about K-12, non-governmental organizations, and other entities outside of those who would have a vested interest in AI for making money or advancing a field. Right? Talk to us about that.
Amarda Shehu (26:50):
Right? Yeah. So there is great interest among K-12s, in community colleges, and actually some community colleges are already running ahead, and they're thinking about all kinds of, you know, certificates and expertise that they can give to their graduates in this space, in the AI space. But there's a lot of, I would say, desire too, but not knowing how to, right? And there's just so much. There are not just even the major tech companies, but small companies that are experimenting with, say, new chatbots for personalized education for K-12. That space is really taking off. But the real question in K-12 is, well, I may like the capabilities, I may like to, you know, give my students that extra help that they may need, right? At home. For instance, imagine a student whose parents speak English as a second language. And so that student might not get the support at home for any concept that they didn't get in class, right? They may struggle a little bit with homework. There are huge opportunities to help students level up with these technologies, but the questions among K-12s are, can we do this safely? Right? What happens to the data that the students are putting in as they're interacting, right?
President Gregory Washington (28:05):
But, but back up for a minute. We're acting as if, if we don't do this, the young people won't get it.
Amarda Shehu (28:11):
Oh, they're already doing it <laugh>.
President Gregory Washington (28:13):
Yeah, yeah, yeah. And I think that's misguided, because the reality is, just because you're not teaching it, don't assume they won't learn it: A) there are more nefarious entities out in the community that will help these young people learn it, and B) don't discount their own individual curiosity, right? If this is something that they want to learn, right? There are so many tools available online and through YouTube and other mechanisms. You could be self-taught. And so
Amarda Shehu (28:55):
They are.
President Gregory Washington (28:56):
My challenge is that oftentimes we're entering into these spaces behind the ones that we are responsible to teach. Meaning, especially when it comes to utilizing the technology, they've already adopted it, they're using it. Then here you come trying to teach them how to do something that they've actually been doing for months.
Amarda Shehu (29:23):
Oh, trust me, we're not <laugh>. You reminded me of an interesting thing. When was it, I can't quite recall, I think it was a few months back, actually. I've held several events with students, okay, at different levels. And I held an event with master's, PhD, and some senior students in the College of Engineering. And I told them, it's a safe space. I just wanna learn, we just wanna learn how you're using these tools. We know you're using them. We just wanna learn use cases from you. And, you know, no faculty were allowed. You know, I said, it's just me, it's just me and maybe some of my students listening in. And I was blown away. Okay? I learned from them not only what kinds of tools are out there, but how to use them for things I never thought about.
Amarda Shehu (30:11):
So they're already ahead of the game. The kids, as we say, they're already using these technologies. The questions that I was talking about earlier are questions posed by instructors, right? In K-12s. Because very often they have to comply with very specific regulations, whether they're state regulations or coming from the Department of Education, right? So that is the challenge. It's not so much about, you know, what the kid is doing on their own time. It's in the classroom. If I want to embed these technologies in the classroom, how do I make sure that I'm compliant, right? Not just with data privacy and security, but compliant with whatever regulations there may be in Virginia or in North Carolina. And sometimes they're a little bit different. So that is what we're trying to help K-12s with. We're trying to better understand the space as well, right? There's some education for us sitting here in a public university, but we're trying to understand from them, okay, what are those regulations, and how do you map those regulations into specific things that you look for in this technology?
Amarda Shehu (31:10):
So we're doing, in some sense, a matching, but it's also an education, right? We are educating the instructors on what tools already exist and what they can do with these tools. And they're also educating us in terms of the borders within which they have to operate. So those are the conversations we're having. And I'm really excited about the summit in May, because that's where we will all sit in the same space and talk to one another, right? And educate one another. And there's gonna be training there too. We're also gonna be training some of these instructors in K-12 so that they themselves have a deeper understanding.
President Gregory Washington (31:43):
So right now, the big discussion that we are still dealing with, and the big discussion quite frankly that's happening nationally, is the discussion around how much of the technology you utilize for the benefit of getting a task done...
Amarda Shehu (32:07):
Mm-hmm <affirmative>.
President Gregory Washington (32:08):
And how much of that is you personally. Let me state it a little clearer. There are articles about presidents and other leaders on campuses who've gotten in trouble because they want to send a memo to campus on a specific issue, right? They consult ChatGPT; they say, here, write a memo to address this particular broad-based issue. And it could be any issue. ChatGPT produces it, they may spruce it up a little bit, put their signature on it, and boom, it goes out to their broad institutional communities.
Amarda Shehu (32:52):
Okay? Yeah. Don't do that. Don't do that.
President Gregory Washington (32:54):
No, no, no.
Amarda Shehu (32:55):
Don't ChatGPT for that <laugh>.
President Gregory Washington (32:56):
Well, what happens is, undoubtedly, somebody, some itinerant person, checks it. And then when they check it, they say, well, wait a minute. And then you hear: oh, this was generated by ChatGPT. There's absolutely no way a leader in our organization should be using a bot to give us feedback on how we should operate. And so my question to that is, well, if the information is right, why not? Right? And so, gimme your thought, 'cause I got two or three more after this one. Gimme your thought on that one specifically.
Amarda Shehu (33:33):
I'll be quick. So first of all, anybody that tells you, I checked this and this is AI generated? Uh, nope. They actually cannot do that. Okay. Uh, I've done a couple of projects with students, but also, if you read, The Chronicle of Higher Education has had articles on this: the false positives are crazy.
President Gregory Washington (33:55):
Oh, so, so sometimes a person might not have even used AI.
Amarda Shehu (33:58):
No, actually, you should not trust that. The false positives are absolutely crazy. And there are examples of kids submitting their own work, for instance, in high school, and a teacher saying this is AI generated. And there are articles that say, hold on, with this you penalize kids that are, say, for instance, on the autism spectrum, right? They have a very structured way in which they write essays; they read a little bit differently than others. But you really cannot tell. I was actually talking with our middle schooler, and she was like, mom, I'm so afraid. What if my teacher says this is AI generated? I shouldn't use big words in this. I said, you use your big words, and you give this article to your teacher that says: anybody that is telling you that they can spot AI-generated text, uh, don't trust them.
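[Editor's note: a quick, hypothetical back-of-the-envelope calculation illustrates why even a small false-positive rate is a serious problem at scale. The 1% rate and the essay counts below are assumptions for illustration only, not figures cited in this episode.]

```python
# Hypothetical numbers, for illustration only: even a detector with a
# 1% false-positive rate wrongly flags many honest students at scale.
false_positive_rate = 0.01   # assumed rate, not a measured figure
honest_essays = 500 * 4      # e.g., 500 students, 4 essays each

expected_false_flags = honest_essays * false_positive_rate
print(f"Expected honest essays flagged as AI-written: {expected_false_flags:.0f}")
# Prints: Expected honest essays flagged as AI-written: 20
```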
President Gregory Washington (34:39):
Okay. Well, this is good, because that leads to my second question. So now you're a student, and you have a writing assignment: write an article on the topic of wearing burkas in public. Right? And so you go online, the student pulls up ChatGPT or Anthropic or any of these others, and actually asks it the question.
Amarda Shehu (35:09):
Mm-hmm <affirmative>.
President Gregory Washington (35:10):
And then the bot gives back a very thoughtful reply and response. Now that student has one of two choices. They can copy that, drop it into their article, submit it, and say that they're done. Right? Or they can use it as a tool: pull references from it, use it as a way to get a more in-depth understanding, and use it as a start to developing their own thoughts on the topic.
Amarda Shehu (35:48):
Yeah.
President Gregory Washington (35:49):
Talk to me about the right way. And the wrong way.
Amarda Shehu (35:53):
The first way that you mentioned is the wrong way. Okay? If you just copy and paste without attribution, right? That is going against academic integrity, right? You don't do that with any piece that you find on the web. So it's exactly the same thing, right? You don't take from a book, copy and paste. So you shouldn't take from ChatGPT or Claude or whatever it is that you're using, copy and paste, and pass it off as your own work. Alright? So that's the key operating term: passing it off. The trouble is you're passing it off as your own work. Now, if you attribute it, right, and you say, you reference it, that is okay. I know that some instructors may not be okay with it, but I think some of it is because of, you know, a lack of understanding that it is, in some sense, the same as doing some research, right?
Amarda Shehu (36:39):
And saying, okay, this is the consensus. But the real, you know, mode in which students should be operating--and that's, I think, also sort of a charge for instructors--is to go beneath the surface. You picked that example, right? Burkas in public. And then ask the question: but what do you think? Right? You may wanna be informed, right? In terms of, okay, what does this mean? Where has this happened? What controversies have come up out there? Because you may wanna look at the problem from many different angles, and, you know, you may not think about all of those because of where you live or, you know, the community you're in. You just haven't thought about those issues. So using ChatGPT, or going and looking at articles online, is a way of educating yourself. But the second way you mentioned, using that as a way to go deeper, right? An inch deeper, an inch beneath the surface, and say, okay, here is now what I think, or here is where I'm coming in. Or maybe even render judgment in some way, right? So: I'm thinking of all these things that are out there, and this makes more sense for these reasons, based on my experiences and all my thinking. That's where you should be heading. That's the right way.
President Gregory Washington (37:44):
Okay. That's my thought. Here's number three.
Amarda Shehu (37:46):
Mm-hmm <affirmative>.
President Gregory Washington (37:47):
There have been a number of high-profile firings of university presidents because of plagiarism, which was quote-unquote unearthed by checkers, these software tools that are designed to map plagiarism and to catch plagiarism in a person's work. Here's the challenge. Oftentimes it's looking 5, 10, 20, 25 years back. I know a president who's dealing with this now for a paper that was written 30 years ago. Right? Okay. So 30 years ago we didn't have the tools to help check. So if you're working with a graduate student, and you were working with that student 20 years ago, and let's say that student, who's learning, might have taken too much liberty with quoting or not quoting, and utilized some text that they found in a related work somewhere. As a professor, you may read their work, but you don't necessarily know every single one of those references.
President Gregory Washington (39:13):
It would be too time consuming. At some point, you trust your student, that the student has given you work that is not lifted and not plagiarized.
Amarda Shehu (39:23):
Yes.
President Gregory Washington (39:25):
And I don't know that it's fair to use tools that have been developed now to check stuff that was written 15, 20, 30 years ago, 'cause we didn't have those tools. If we had those tools then...
Amarda Shehu (39:43):
We would, you would check. Right? We could check.
President Gregory Washington (39:45):
And now faculty are doing those checks with these tools, right?
Amarda Shehu (39:49):
Hmm.
President Gregory Washington (39:50):
But you didn't have those tools before. And the other thing is, even with the tools now, what we're finding is that oftentimes two people who have a similar background will write the same thing. The same thing, explained the same way.
Amarda Shehu (40:06):
Yeah.
President Gregory Washington (40:07):
To explain a phenomenon.
Amarda Shehu (40:08):
Exactly.
President Gregory Washington (40:09):
It doesn't necessarily mean that one copied another. Right. But it does mean that whoever came first is gonna be attributed with the discovery.
Amarda Shehu (40:17):
Yeah. I mean that's convergent evolution, right?
President Gregory Washington (40:19):
Right. And so what we're finding with the tools now is you can come up with something, you write it, you run a check, and you say, oh, well, such-and-such found out the same thing two months ago and they published it in this paper, so we can't put it forward. Doesn't mean you were plagiarizing; it just means you were late. They got to the discovery before you did. And so
Amarda Shehu (40:39):
Or they were published earlier. Right, right. Because it depends.
President Gregory Washington (40:43):
That's right. That's exactly the point I'm making. And what we would do back in the day is you kept very accurate lab books, right? And you would write your lab books in pen, and you would keep dates in your lab books, so that if something happened like that, you could go back to your lab book and say, no, no, no. Even though they published it before me, I actually discovered it on this date.
Amarda Shehu (41:06):
And here's the evidence...
President Gregory Washington (41:07):
And here's the proof that I did so. So the point that I'm making here is we should not be utilizing these tools to go back 15, 20 years to basically state that a person is plagiarizing. Because A) the nature of the work we do as scientists is such that two people can come up with the same answer at the same time, and one may not actually know that the other has made the same discovery.
Amarda Shehu (41:44):
Yeah.
President Gregory Washington (41:45):
And with the large number of publications, it's not like we have five or six publications in any given area; there literally can be hundreds of publications in that area. And it's just not fathomable without an electronic tool. It's just not realistic for you to be able to read all of those to know what everybody else is saying on a topic. And you definitely can't do it; even 20 years ago, there were hundreds of different journals on a singular topic. To me, I think this thing has gone way overboard. And, um, yeah, we are penalizing people and making them look like criminals for what could be oversights, what could be sloppy work, or what could be just, they're just late.
Amarda Shehu (42:33):
Well, yeah, I mean, if I can comment on one thread of it: I tell my students, always be worried when you have a hammer and you're trying to find things to hit with that hammer. So taking--
President Gregory Washington (42:46):
Everything looks like a nail.
Amarda Shehu (42:47):
Yeah. So taking something and trying to unearth or trying to find things that look like, you know, they fit the tool that you have, that's a little bit scary to me, I would say. So what you really wanna do, I would urge folks, is trust people and try to understand something from all angles. Think about all these other things, right? We all have access to all this information. You could have come up with the same idea. Or sometimes in our field you can come up with the same way in which you formulate a problem, right? Because it's the optimal way, right? You wanna fit it in three sentences in that first paragraph in the introduction, and you go to the conferences and you talk to people; that's what I mean by convergent evolution. And so then you start talking about that thing in a similar way, but that doesn't mean that you copied it from each other. So there are multiple reasons why something may look like a nail, but that doesn't mean that it's a nail. So I would say, always in these cases, trust people, and don't just say, oh, I have a tool and I can go and find all these other kinds of things with this tool, because that tool wasn't designed for those things.
President Gregory Washington (43:57):
Here's where we'll wrap up. Okay? Look, as we wind down here in time: there is clearly a significant amount of potential for AI, but there's also one risk that we haven't talked about, and that is that it can deepen existing inequalities, right? So how do we make sure that the work here at George Mason helps bridge these gaps and brings the benefits of AI to all communities? 'Cause just like with the computer, right? Just like with the calculator before it, not all communities benefited the same from the technologies that were developed to help society. And with this technology, the impact that it's going to have and the speed at which it's going to be implemented mean that some people will definitely be left behind. Let me give you an example. One of the things Elon Musk is doing with artificial intelligence and with his work now: they've been developing robots.
President Gregory Washington (45:01):
Uh, humanoid assistants. And those robots now are being test-marketed in the homes of individuals, some famous individuals, right? So I read the other day that Kim Kardashian <laugh> has her personal assistant robot.
Amarda Shehu (45:15):
I read that too. <laugh>
President Gregory Washington (45:17):
That was developed through Elon Musk's company, right? This kind of thing. So you got some folk who are using the technology at that level, literally interacting day to day, hand to hand. And you got some folk who are still trying to determine what it is. They're still asking the question, what is AI, and how does it affect me where I live? So there is a gulf that's widening, and with a rapidly developing technology like AI, that gulf is going to expand, and expand rapidly. And so how do we make sure the work here bridges the gaps to all communities?
Amarda Shehu (45:59):
Yeah. By the way, I wouldn't worry too much about that robot in Kim Kardashian's home. It's, you know, it's a little bit of a gimmick, right? It's not really an autonomous robot, if you've been following the news. But anyway, I'm not worried about that. But I am worried about this gulf, right? These deepening inequalities. In some sense, I mean, you and I know this is a story of humanity, right? I grew up in a country where I didn't see a computer, I think, till senior year in high school, okay? And actually, I went to a mosque; they were giving training <laugh>. Yeah, I don't know why in a mosque, but that's where I went. So I learned, okay, here's this thing and here's how you turn it on <laugh>. But on a more serious note, I do worry about leaving AI innovation only to companies, because they have different objectives, right?
Amarda Shehu (46:46):
They're not necessarily thinking about the underserved populations, and they're not thinking about how you lift everyone up. But here is why I am in a public institution: because only in a public university do you have this concern, right? You're thinking about serving your region, you're thinking about serving your students, right? You're thinking about what student success means, and how do I prepare them? How do I lift them up? Right? You're a deep believer in that, in opening it up to all students. And here is why I really want universities to claim their space and be really aggressive in AI innovation, so that the narrative is not just with the companies. And that's why we keep thinking about, okay, educational programs: how can we prepare our students? And here's why we're thinking, how do we connect with community colleges so we can bring those students in?
Amarda Shehu (47:32):
And here's why we're thinking about the K-12s, right? How do we prepare those students so we don't lose them early, so we can help them? And how do I train teachers to teach with AI in those K-12s, right? So that they can give that enthusiasm, that energy and that motivation to the students, so we don't lose them. We have, I think you know this, two data labs, sort of two big data science projects here at Mason. These are big investments by Virginia that are trying to reach the rural regions in Virginia. Those are launching pads for us. We're thinking about how we utilize them so we expand from data science to AI, right? We also have faculty here. So think about inequalities, okay? How do they arise? How do we actually address inequalities? We have faculty whose research is specifically in this space, trying to better understand inequalities.
Amarda Shehu (48:22):
And then, how do you change education so that you make sure that you bridge, right? You don't allow these inequalities. But honestly, whenever folks tell me, oh, you know, there's the singularity coming and there's artificial general intelligence coming, I say, oh, stop doomscrolling. Just stop doing that. Think about the real dangers. Inequality is the real danger. And that is what we really have to, you know, on a daily basis, actively think about: how do I tackle it, what do I do so that my kids are not left behind? All right, I'm worried about my kids. So go away from the abstract. Think about: what about your kids? What are you doing for your kids? How can we help, you know, those kids there in Hampton Roads in Virginia? What are we doing about them? And so I think this is the charge of our times, I believe, for universities: how do you make sure that you bring everybody along, so we can all participate in this new digital society in which, you know, we're already living, but where we are gonna keep interacting more and more with AI in the future.
President Gregory Washington (49:23):
We are definitely, definitely at the forefront of this technology, but I think we're at the forefront in the right way. So I want to thank you for your engagement. I want to thank you for what you will do in the future, what we are gonna ask you to do, as we move this vital technology forward. You know, I expect that this is the first of many conversations that we will have around this topic of artificial intelligence. Amarda, thank you for sharing your expertise and for the leadership you bring to this university.
Amarda Shehu (49:59):
Thank you for having me.
President Gregory Washington (50:01):
Alright. I am George Mason President Gregory Washington. Thanks for listening. And tune in next time for more conversations that show why we are All Together Different.
Outro (50:17):
If you like what you heard on this podcast, go to podcast.gmu.edu for more of Gregory Washington's conversations with the thought leaders, experts, and educators who take on the grand challenges facing our students, graduates, and higher education. That's podcast.gmu.edu.