Societal Sessions
This page outlines the themes and discussion questions for the Societal Sessions in our Machine Learning for Interdisciplinary Audiences course. Each topic is designed for a 60-minute participant-led session and includes key framing questions and interdisciplinary relevance.
Topics Covered
- The EU AI Act – Comparing Regulation in the EU, China, and the US
- AI in Healthcare & Medicine – Ethical and Regulatory Concerns
- AI for Fake News Generation and Detection
- Impact of AI and Automation on the Future of Work / Labour Markets
1. The EU AI Act – Comparing Regulation in the EU, China, and the US
The EU Artificial Intelligence Act (2024) is the first broad regulatory framework designed to govern the development and use of AI systems. It applies a risk-based classification system: banning some AI uses, tightly regulating others, and imposing transparency and safety obligations on general-purpose models. The Act aims to ensure that AI systems developed and used within the EU are safe, ethical, and respect fundamental rights, while also fostering innovation. For an interdisciplinary audience, understanding it is crucial, since it will significantly shape how AI is developed, deployed, and governed across sectors globally, setting a precedent for future AI legislation.
But how does this approach compare to China's more state-controlled framework or the United States' more flexible, voluntary model? This topic invites analysis of how different regions are shaping the future of AI through regulation. You might explore policy goals, enforcement structures, cross-border compliance issues, or how regulations reflect different cultural and political values. A comparative perspective is encouraged, but you are free to focus more deeply on one region if you prefer.
Potential Discussion Points:
- Risk-Based Approach: Explain the core principle of the AI Act, which categorizes AI systems based on their potential risk (unacceptable, high, limited, minimal). Discuss examples for each category.
- Prohibited AI Practices: What AI applications are outright banned under the Act (e.g., social scoring by governments, real-time remote biometric identification in public spaces)? Why are these considered unacceptable risks?
- High-Risk AI Systems: What constitutes a high-risk AI system (e.g., in critical infrastructure, education, employment, law enforcement)? What are the stringent requirements for these systems (e.g., risk management systems, data governance, human oversight, transparency, cybersecurity)?
- Transparency Obligations: Discuss the transparency requirements for certain AI systems, such as chatbots or AI-generated content, to ensure users are aware they are interacting with or viewing AI-generated material.
- Impact on Businesses and Innovation: How will the AI Act affect companies developing or deploying AI, both within and outside the EU (extraterritorial effect)? Discuss the balance between regulation and fostering innovation. What are the potential penalties for non-compliance?
- Fundamental Rights and Ethical AI: How does the Act aim to protect fundamental rights (e.g., privacy, non-discrimination)? Discuss the broader ethical principles embedded in the regulation.
- Implementation and Timeline: Briefly touch upon the phased implementation timeline of the Act, with different provisions coming into effect over the next few years.
- Global Influence: How might the EU AI Act influence AI regulation in other parts of the world?
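The risk-based approach in the first point above can be made concrete with a small sketch. The tier names come from the Act itself, but the example systems below are illustrative placeholders drawn from the examples commonly cited in discussions of the Act, not an authoritative legal classification:

```python
# Sketch of the AI Act's four risk tiers. The example systems are
# illustrative placeholders for discussion, not a legal classification.
RISK_TIERS = {
    "unacceptable": ["government social scoring",
                     "real-time remote biometric ID in public spaces"],
    "high": ["CV screening for hiring", "exam scoring in education",
             "credit scoring"],
    "limited": ["customer-service chatbot", "AI-generated image labelling"],
    "minimal": ["spam filter", "video-game AI"],
}

def risk_tier(system: str) -> str:
    """Return the tier a named example system falls under, or 'unknown'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unknown"

print(risk_tier("spam filter"))  # minimal
```

In the Act, higher tiers carry heavier obligations: unacceptable-risk uses are banned outright, high-risk systems face strict requirements, and limited-risk systems mainly owe transparency to users.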
You are strongly encouraged to choose specific angles or case studies that you may find particularly relevant!
2. AI in Healthcare & Medicine – Ethical and Regulatory Concerns
Artificial intelligence is rapidly transforming healthcare, from clinical diagnostics to hospital logistics and patient risk prediction, offering unprecedented opportunities to improve diagnosis, personalize treatments, accelerate drug discovery, and enhance operational efficiency. These innovations promise faster, more personalized, and often more accurate care. But as algorithms become more embedded in medical decision-making, they also raise critical ethical and regulatory questions: How do we ensure patient safety when using black-box models? Who is responsible for harm caused by an AI-driven diagnosis? How can we address bias in training data? This topic invites exploration of how AI is shaping modern medicine, and how ethics, regulation, and societal values intersect with clinical innovation. You could explore current regulations, global differences in governance, or specific controversies such as explainability, data privacy, or algorithmic bias.
Potential Discussion Points:
- Applications of AI in Healthcare: Discuss concrete examples of AI in use, such as:
  - Diagnostics: AI assisting in interpreting medical images (X-rays, MRIs, CT scans) for earlier and more accurate disease detection (e.g., radiology, pathology).
  - Drug Discovery & Development: AI accelerating the identification of new drug candidates, predicting molecular interactions, and optimizing clinical trial design.
  - Personalized Medicine: AI analyzing patient data (genomics, medical history, lifestyle) to tailor treatment plans and predict individual responses to therapies.
  - Predictive Analytics: AI forecasting disease outbreaks, patient deterioration, or hospital readmissions.
  - Robotics in Surgery: AI-powered robots assisting surgeons with precision and minimally invasive procedures.
  - Administrative Efficiency: AI automating tasks like appointment scheduling, medical coding, and insurance claims processing.
- Benefits: What are the primary advantages of AI in healthcare (e.g., improved accuracy, faster diagnoses, reduced costs, enhanced patient outcomes, addressing workforce shortages)?
- Challenges and Risks: Discuss the significant hurdles and dangers:
  - Data Privacy & Security: Handling sensitive patient data, cybersecurity risks, and ensuring compliance with regulations (e.g., HIPAA, GDPR).
  - Bias and Fairness: AI models trained on unrepresentative data can lead to biased diagnoses or treatments for certain demographic groups, exacerbating health disparities.
  - Transparency & Explainability (Black Box Problem): The difficulty in understanding how complex AI models arrive at their conclusions, especially in critical medical decisions.
  - Regulatory Hurdles: The slow pace of regulation compared to rapid technological advancements. How to ensure safety and efficacy of AI medical devices?
  - Accountability & Liability: Who is responsible when an AI system makes an error that harms a patient?
- Human-AI Collaboration: The role of human clinicians in an AI-augmented healthcare system. Will AI replace doctors or augment them?
- Ethical Considerations: Explore broader ethical dilemmas:
  - Informed consent for AI use in treatment.
  - Maintaining human empathy and compassion in care.
  - Equitable access to AI-powered healthcare technologies.
  - The potential for over-reliance on AI and deskilling of medical professionals.
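The bias-and-fairness point above can be made tangible with a simple disparity check. The sketch below uses entirely made-up toy predictions (not real clinical data) to compute a hypothetical diagnostic model's false-negative rate per demographic group; a large gap between groups is one basic signal of the disparities discussed above:

```python
from collections import defaultdict

# Toy records: (group, true_label, model_prediction); 1 = disease present.
# Entirely made-up data for illustration, not real clinical results.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]

def false_negative_rates(records):
    """Per-group fraction of true positives the model missed (higher = worse)."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

rates = false_negative_rates(records)
print(rates)  # group B's positives are missed more often than group A's
```

Real fairness auditing involves many competing metrics (equalized odds, calibration, and others) that cannot all be satisfied at once, which is part of why the topic is contested.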
You are strongly encouraged to choose specific angles or case studies that you may find particularly relevant!
3. AI for Fake News Generation and Detection
Generative AI has dramatically lowered the barrier to producing persuasive fake content: fabricated news articles, synthetic images, deepfake videos, and voice clones. This makes it harder for individuals to distinguish between authentic and fabricated information, leading to confusion, skepticism, and the potential for manipulation on a massive scale. In an increasingly digital world, this proliferation of misinformation and disinformation poses a significant threat to democratic processes, public trust, and individual well-being.
This raises profound questions for society: How do we protect public discourse and democratic institutions from automated misinformation? What role should governments, tech companies, and civil society play in combating these threats? And what tools do we have to detect and defend against AI-generated manipulation? This topic invites you to explore how AI is weaponized for information warfare, how detection techniques are evolving, and what broader societal, political, or psychological challenges are involved. Your presentation could focus on technical, political, or ethical aspects, or combine them.
Potential Discussion Points:
- The Scale of the Problem: How has generative AI (e.g., large language models, image generators) changed the landscape of fake news creation? What are deepfakes, and what specific threats do they pose (e.g., to individuals, elections, public figures)?
- Methods of Generation: Briefly discuss how AI can be used to create synthetic media (e.g., text generation, image manipulation, voice cloning, deepfake videos). What makes AI-generated fake content so convincing?
- The Detection Challenge: What are the current approaches to detecting AI-generated fake news? How effective are AI-powered detection tools, and what are their limitations (e.g., the arms race between generation and detection)?
- Societal Impact: How does the spread of AI-generated fake news affect public discourse, trust in institutions, and individual decision-making? What are the psychological and social consequences?
- Mitigation Strategies: Beyond technical detection, what other strategies can help combat AI-driven misinformation (e.g., media literacy, fact-checking, platform policies, regulation)? What role do individuals, tech companies, governments, and educational institutions play?
- Ethical Considerations: What are the ethical dilemmas associated with AI-generated fake news? How can we balance freedom of speech with the need to combat harmful disinformation?
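To make the detection challenge above tangible, here is a deliberately naive stylometric sketch: a bag-of-words score that flags text whose words overlap heavily with a small list of sensationalist cue words. The cue-word list and threshold are invented for illustration; real detectors learn features from data and are far more sophisticated (and still fallible):

```python
import re

# Invented cue words loosely associated with sensationalist writing; a real
# detector would learn features from labelled data, not use a fixed list.
CUE_WORDS = {"shocking", "miracle", "exposed", "secret", "unbelievable"}

def sensationalism_score(text: str) -> float:
    """Fraction of words in the text that are sensationalist cue words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in CUE_WORDS for w in words) / len(words)

def looks_sensationalist(text: str, threshold: float = 0.1) -> bool:
    # Threshold chosen arbitrarily for this toy example.
    return sensationalism_score(text) >= threshold

print(looks_sensationalist("Shocking secret exposed by miracle cure"))  # True
print(looks_sensationalist("The committee published its annual report"))  # False
```

The point of the sketch is its weakness: surface cues like these are trivial to evade, which is exactly why generation and detection form an arms race.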
You are strongly encouraged to choose specific angles or case studies that you may find particularly relevant!
4. Impact of AI and Automation on the Future of Work / Labour Markets
AI and automation are changing the nature of work in profound and sometimes unpredictable ways. From robotic process automation to large language models taking over routine cognitive tasks, many jobs are being restructured or eliminated outright, raising both hopes for increased productivity and fears of widespread job displacement. Understanding this dual impact is crucial for individuals, businesses, and policymakers to navigate the evolving landscape of work, prepare for future skill demands, and ensure an equitable transition.
But is this change necessarily negative? Will AI displace workers, or simply transform how we work? What types of jobs are most vulnerable, and what new kinds of work might emerge? This topic is a chance to explore how these technologies affect labour markets, inequality, education, and job design. You might focus on economic forecasts, skill shifts, policy responses, or ethical questions about automation. You are encouraged to critically examine both utopian and dystopian narratives around "the future of work."
Potential Discussion Points:
- Job Displacement vs. Job Creation: Discuss the debate around whether AI will lead to net job losses or create more new jobs than it displaces. Which types of jobs are most at risk of automation (e.g., repetitive, routine tasks)? What new roles are emerging (e.g., AI trainers, prompt engineers, AI ethicists)?
- Job Augmentation and Transformation: How is AI augmenting human capabilities, making workers more efficient and productive? Discuss how AI can free up humans from tedious tasks, allowing them to focus on more creative, strategic, and interpersonal aspects of their roles. Provide examples of AI tools used in various professions.
- Skills Gap and Reskilling: What new skills are becoming essential in an AI-driven economy (e.g., digital literacy, critical thinking, problem-solving, adaptability, emotional intelligence)? Discuss the importance of lifelong learning, upskilling, and reskilling initiatives for individuals and the role of education systems and employers in facilitating this transition.
- Economic and Social Inequality: How might AI and automation exacerbate existing inequalities or create new ones? Discuss the potential for a widening gap between high-skilled workers who can leverage AI and those in roles susceptible to automation. What are the implications for income distribution and social mobility?
- Policy Responses: What role can governments and international organizations play in managing the transition (e.g., universal basic income, retraining programs, new labor laws, social safety nets)?
- Ethical Considerations: Discuss the ethical implications of AI in the workplace, such as algorithmic bias in hiring, surveillance, and the impact on worker well-being and autonomy.
You are strongly encouraged to choose specific angles or case studies that you may find particularly relevant!