AI lab TL;DR | Jacob Mchangama - Are AI Chatbot Restrictions Threatening Free Speech?
🔍 In this TL;DR episode, Jacob Mchangama (The Future of Free Speech & Vanderbilt University) discusses the high rate of AI chatbot refusals to generate content for controversial prompts, examining how this may conflict with the principles of free speech and access to diverse information.
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[00:51] Q1-How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?
⏲️[06:53] Q2-Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?
⏲️[10:20] Q3-What changes would you recommend to better align chatbot moderation policies with free speech protections?
⏲️[15:18] Wrap-up & Outro
💭 Q1 - How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?
🗣️ "This is the first time in human history that new communications technology does not solely depend on human input, like the printing press or radio."
🗣️ "Limiting or restricting the output and even the ability to make prompts will necessarily affect the underlying capability to reinforce free speech, and especially access to information."
🗣️ "If I interact with an AI chatbot, it's me and the AI system, so it seems counterintuitive that the restrictions on AI chatbots are more wide-ranging than those on social media."
🗣️ "Would it be acceptable to ordinary users to say, you're writing a document on blasphemy, and then Word says, 'I can't complete that sentence because it violates our policies'?"
🗣️ "The boundary between freedom of speech being in danger and freedom of thought being affected is a very narrow one."
🗣️ "Under international human rights law, freedom of thought is absolute, but algorithmic restrictions risk subtly interfering with that freedom.(...) These restrictions risk being tentacles into freedom of thought, subtly guiding us in ways we might not even notice."
💭 Q2 - Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?
🗣️ "The AI act includes an obligation to assess and mitigate systemic risk, which could be relevant here regarding generative AI’s impact on free expression."
🗣️ "The AI act defines systemic risk as a risk that is specific to the high-impact capabilities of general-purpose AI models that could affect public health, safety, or fundamental rights."
🗣️ "The question is whether the interpretation under the AI act would lean more in a speech protective or a speech restrictive manner."
🗣️ "Overly broad restrictions could undermine freedom of expression in the Charter of Fundamental Rights, which is part of EU law."
🗣️ "My instinct is that the AI act would likely lean in a more speech-restrictive way, but it's too early to say for certain."
💭 Q3 - What changes would you recommend to better align chatbot moderation policies with free speech protections?
🗣️ "Let’s use international human rights law as a benchmark—something most major social media platforms commit to on paper but don’t live up to in practice."
🗣️ "We showed that major social media platforms' hate speech policies have undergone extensive scope creep over the past decade, which does not align with international human rights standards."
🗣️ "It's conceptually more difficult to apply international human rights standards to an AI chatbot because my interaction is private, unlike public speech."
🗣️ "We should avoid adopting a 'harm-oriented' principle to AI chatbots, especially when dealing with disinformation and misinformation, which is often protected under freedom of expression."
🗣️ "It's important to maintain an iterative process with AI systems, where humans remain responsible for how we use and share information, rather than placing all the responsibility on the chatbot."
📌 About Our Guest
🎙️ Jacob Mchangama | The Future of Free Speech & Vanderbilt University
🌐 Article | AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem
🌐 The Future of Free Speech
🌐 Jacob Mchangama
Jacob Mchangama is the Executive Director of The Future of Free Speech and a Research Professor at Vanderbilt University. He is also a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE) and author of “Free Speech: A History From Socrates to Social Media”.