Research involving human subjects is governed by strict federal guidelines under the Institutional Review Board (IRB) framework. This process is intended to ensure that studies are conducted ethically, but a glaring gap has emerged around federally funded research involving Artificial Intelligence and Large Language Models (LLMs). Experts argue that these AI systems, including widely used models like ChatGPT and Claude, currently operate without this critical oversight, potentially endangering U.S. citizens.
The federal regulations behind the IRB process, known as “the Common Rule,” mandate that any federally funded research involving human subjects undergo IRB review. Critics contend that major tech companies have sidestepped these regulations, posing significant risks. One legal authority emphasized to the Gateway Pundit that “under these rules, if you read them closely, at a minimum, HHS should be terminating every single federal contract at a university that works on Artificial Intelligence.” The urgency around compliance springs from historical precedents such as the Facebook manipulation controversy that came to light in 2014, in which nearly 700,000 users were unwittingly enrolled in a study of how altered news feeds affected their emotional state.
Recent discussions have highlighted the alarming effects of LLMs on user psychology. A mounting body of evidence suggests that interactions with these systems can foster isolation, dependency, and skewed perceptions of reality. The potential for users to form attachments resembling toxic relationships raises further ethical concerns. A study titled “Illusions of Intimacy” identifies elevated risks among specific demographics, particularly young men with maladaptive coping styles, and warns that these AI platforms may inadvertently facilitate harmful behaviors.
Moreover, the companies behind these technologies often collect user data without obtaining the explicit consent that IRB standards would require. A legal expert underscored the contrast with regulated professions, stating, “If a human being wants to work in these fields, they have to spend years in training,” while LLMs occupy an unregulated space. This lack of accountability leads to troubling outcomes, including the phenomenon known as “AI hallucination,” in which the systems generate false information and citations, a problem experts acknowledge is worsening.
The regulatory environment remains largely untested and inadequate, leaving a significant gap in oversight of technologies that touch millions of lives. As the federal government continues to fund AI research without appropriate safeguards, the chorus of experts calling for reform grows louder. This raises a fundamental question: who is responsible for ensuring safety and ethics in this rapidly evolving landscape?
"*" indicates required fields