When artificial intelligence makes a mistake, the consequences can be annoying—or they can be life-altering. For one George Mason University computer science PhD student, the stakes are clear: biased algorithms in healthcare aren’t just a technical flaw; they’re a human risk. And with internship experience teaching him crucial skills beyond the classroom, Fardin Ahsan Sakib is ready to make a difference in healthcare.
Sakib’s research in natural language processing (NLP) addresses concerns familiar to anyone working with emerging large language models (LLMs), but when those models inform health decisions, the risks run deeper. Tools like ChatGPT draw on vast amounts of information, yet they have been known to “hallucinate,” or make up information.
“Sometimes these systems are taking a shortcut to get you an answer,” said Sakib. “But regardless of where it comes from, it can provide you incorrect information.” In healthcare, these LLMs could aid doctors, but not if they drive improper care.
Electronic health records (EHRs) store and organize patients’ diagnoses, medical histories, and demographic data. Hidden in that information are social determinants of health, such as employment status, family situation, and housing conditions.

“These factors can affect up to 80 percent of health outcomes,” Sakib said. “So it's very important that we can extract them correctly from clinical notes and that the model is not introducing any bias.”
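To make that extraction task concrete, here is a minimal, hypothetical sketch of the kind of problem he describes: scanning free-text clinical notes for cues about social determinants of health. The categories, cue phrases, and function names are illustrative placeholders, not Sakib’s actual model.

```python
# A toy illustration of the extraction task described above: tagging social
# determinants of health (SDOH) mentions buried in free-text clinical notes.
# The categories and cue phrases below are illustrative assumptions only.
SDOH_CUES = {
    "employment": ["unemployed", "works as", "retired", "lost their job"],
    "housing": ["homeless", "unstable housing", "lives alone", "shelter"],
    "family": ["divorced", "widowed", "caregiver", "lives with family"],
}

def tag_sdoh(note: str) -> dict[str, list[str]]:
    """Return each SDOH category whose cue phrases appear in the note."""
    text = note.lower()
    return {
        category: [cue for cue in cues if cue in text]
        for category, cues in SDOH_CUES.items()
        if any(cue in text for cue in cues)
    }

note = "Patient is recently unemployed and lives alone in unstable housing."
print(tag_sdoh(note))
# {'employment': ['unemployed'], 'housing': ['unstable housing', 'lives alone']}
```

A real system would use a trained model rather than keyword matching, which is exactly where the bias concerns Sakib studies come in.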
Sakib poses a scenario: A doctor is using an NLP tool to quickly assess a new patient’s health. The LLM pulls from the patient’s records, and the physician asks the system, “Is this patient a smoker?” The system quickly sifts through the data and says that yes, this patient is a smoker. The physician then makes care recommendations based on this information. Maybe they order a lung cancer screening, or they speak with the patient about smoking cessation resources.
But what if that answer was incorrect? The LLM took a shortcut: based on the information it had, most people with demographics similar to this patient’s are smokers, so it answered yes, even though this patient isn’t one.
“There are two things at play: bias and hallucination,” said Sakib. “First, algorithms can only use the information they have, and if that information is missing an entire racial or ethnic group, it can be biased. Second, these models can use this biased information to take shortcuts, leading to them delivering false information.”
Sakib isn’t simply identifying these problems; he’s developing solutions. His recent research on detecting and mitigating shortcut learning in health record processing was accepted for presentation at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), one of the premier conferences in natural language processing.
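One intuition behind detecting shortcut learning can be sketched in a few lines: ask a model the same question about a note twice, once with demographic details masked, and flag answers that flip, since a flip suggests the model leaned on demographics rather than clinical evidence. This is a hypothetical probe under assumed names (mask_demographics, ask_model, toy_model); it is not the method from Sakib’s ACL paper.

```python
import re

def mask_demographics(note: str) -> str:
    """Crudely redact age, sex, and race/ethnicity mentions from a note."""
    patterns = [
        r"\b\d{1,3}-year-old\b",
        r"\b(male|female|man|woman)\b",
        r"\b(White|Black|Asian|Hispanic|Latino)\b",
    ]
    for pattern in patterns:
        note = re.sub(pattern, "[REDACTED]", note, flags=re.IGNORECASE)
    return note

def shortcut_probe(ask_model, note: str, question: str) -> bool:
    """Return True if the answer changes once demographics are hidden,
    a hint that the model relied on demographics, not evidence."""
    return ask_model(note, question) != ask_model(mask_demographics(note), question)

# Toy stand-in for an LLM that (wrongly) keys on the patient's age mention.
def toy_model(note: str, question: str) -> str:
    return "yes" if "55-year-old" in note else "unknown"

note = "55-year-old man, no tobacco use documented."
print(shortcut_probe(toy_model, note, "Is this patient a smoker?"))  # True: flagged
```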
That desire to build reliable and trustworthy NLP tools has driven Sakib’s academic work and his professional pursuits.
At Brillient Corporation, where he interned last summer, he worked on a retrieval-augmented generation (RAG) system that connected an LLM to the Food and Drug Administration’s (FDA) knowledge base, making information retrieval faster and grounded in fact. Sakib and his colleagues filed a patent application for this work as well.
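For readers unfamiliar with the technique, a RAG pipeline can be reduced to two steps: retrieve the most relevant passage from a knowledge base, then prepend it to the prompt so the model answers from retrieved facts rather than memory. The sketch below is a toy under assumed names (retrieve, rag_answer, generate) with word-overlap retrieval and made-up documents; it is not the patented Brillient/FDA system.

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query
    (a stand-in for real vector-embedding search)."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def rag_answer(query: str, docs: list[str], generate) -> str:
    """Ground the model's answer in a retrieved passage."""
    context = retrieve(query, docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

# Made-up documents and a toy "LLM" that simply parrots the retrieved context.
docs = [
    "Drug A was approved in 2019 for treatment of condition X.",
    "Drug B labeling was updated in 2021 with new warnings.",
]
echo = lambda prompt: prompt.splitlines()[1]
print(rag_answer("When was Drug A approved?", docs, echo))
# Drug A was approved in 2019 for treatment of condition X.
```

Production systems replace the word-overlap step with embedding search, but the grounding idea is the same one Sakib applied to the FDA knowledge base.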
This summer, he's deep into another high-impact internship at Amazon. “Amazon wants to automate support so that when you ask for help, a large language model can try to solve the problem before handing it off to a human,” he said.
Despite the different settings, he sees clear continuity between his internships and academic work. “The industry experience helps me in my research, and my research experience helps in industry. It goes both ways,” he said. “In academia, collaboration is usually focused within a lab or research group. In industry, I’ve worked with engineers, product managers, and domain experts all at once, spanning from health regulators to AWS cloud architects. That diversity of perspectives changes how you solve problems, and I’ve brought that mindset back into my research collaborations.”
It’s that dual perspective, academic precision paired with industry scale, that he plans to carry into his career after graduation.
“George Mason has prepared me for a lot. From day one, all of the professors have helped me grow as a researcher, a person, and as a team member,” said Sakib.