CILE-LCFI Winter School 2024: Exploring Ethical Dimensions of AI
The Center for Islamic Legislation and Ethics (CILE), in collaboration with the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge, recently concluded its much-anticipated Virtual Winter School on "AI and its Ethical Challenges: Religious and Cultural Perspectives." Held from September 24th to 27th, the virtual seminar addressed the burgeoning intersections of artificial intelligence (AI) with religious and cultural norms, posing profound questions and offering interdisciplinary insights into the future of AI ethics. Throughout the event, speakers from various specialisations, including philosophy, bioethics, Islamic ethics, gender studies, and postcolonial studies, presented their research and engaged in spirited discussions that highlighted the complex landscape of AI development and deployment and the ethical challenges it poses.
Day 1: Sept 24th, 2024
AI and Regulation: The HEAT Project and Toolkit
Dr Eleanor Drage (Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge)
Day one kicked off with Dr Eleanor Drage delving into the intricacies of AI regulation through the HEAT project, which integrates feminist and anti-racist principles into its framework, aiming to ensure compliance with the EU's regulations on high-risk AI applications. The workshop encouraged a hands-on approach where participants were actively involved in comparing different AI regulations and values across various frameworks, including the EU, Middle Eastern contexts, and those established by major tech companies. Additionally, attendees were given the opportunity to test the HEAT application, providing practical feedback to help refine its functionality and relevance.
The session highlighted the 'placelessness' of current AI legislation, which often overlooks cultural and geographic nuances, emphasising the necessity for AI policies that are not only internationally relevant but also culturally sensitive. Key questions raised during the session focused on the feasibility of a global, value-free ethical framework, the true beneficiaries of AI policies, the practicality of AI solutions in specific contexts, and the crucial role of community involvement in policymaking. These discussions underscored a critical takeaway: effective AI governance requires ongoing dialogue and adjustments that accommodate cultural differences, ensuring that technology serves humanity justly across all borders.
AI, Disinformation and Democracy: Challenges and Prospects
Dr Giulio Corsi (Research Associate at the Leverhulme Centre for the Future of Intelligence, University of Cambridge)
In the following session, Dr Giulio Corsi dissected the intertwined roles of AI, disinformation, and democracy, detailing how AI technologies shape and manipulate the information landscape. The session opened with a historical overview of disinformation, drawing parallels between past manipulations of public opinion and today's AI-driven challenges. Dr Corsi discussed the generation of synthetic media by AI, emphasising the technology's ability to produce realistic, tailored content that escalates the spread of misinformation. The discussion extended to AI's impact on democratic processes, highlighting risks such as election interference and societal polarisation, exacerbated by AI's capacity to personalise and amplify misleading content.
Key questions that surfaced during the discussion included inquiries about the potential for algorithms to learn from takedown requests to avoid amplifying similar misleading content and why there is no more proactive adjustment in social media recommendation algorithms to hinder the spread of misinformation. These questions underscored a widespread concern about the adequacy of current AI governance and the necessity for enhanced regulatory measures to safeguard democratic integrity in the age of digital information.
The session concluded with a consensus on the need for ongoing innovation in AI policy, emphasising collaborative international efforts to harness AI's capabilities responsibly while mitigating its threats to democracy.
Rethinking Human Uniqueness in the Age of AI: Human Intellect, Body and Religious Accountability (taklīf)
Dr Mohammed Ghaly (Director of the Research Center for Islamic Legislation and Ethics and Professor of Islam and Biomedical Ethics at The College of Islamic Studies, Hamad Bin Khalifa University)
Dr Mohammed Ghaly, director of the CILE Center, presented "Rethinking Human Uniqueness in the Age of AI," delving into the Islamic ethical perspectives on human uniqueness amidst AI advancements, focusing on human intellect and the body. He discussed the Islamic viewpoint on humanoid robots, highlighting historical fatwas that equate robotics specialisation with idol-making due to their resemblance to human creation, challenging divine exclusivity in creation. Yet, he contrasted this with Islamic historical practices where mechanical automata, such as Al-Jazari's ablution machine, were used for practical functions, not worship.
The talk also explored AI's mimicry of human cognitive functions, examining how artificial neural networks, loosely modelled on the neurons of the human brain, pose ethical questions about the uniquely human attribute of 'aql, which is critical for religious and moral accountability. Participants engaged with thought-provoking questions on the compatibility of historical Islamic techno-optimism with modern AI challenges, the evolving definitions of blasphemy in response to new technologies, and whether human vulnerability can be a defence against AI's encroachments.
This session underscored the importance of adapting religious and ethical frameworks to address rapid technological advances without compromising fundamental human and religious values.
AI’s Moral Consciousness: The Ethical Values Embedded in the AI Instructions
Dr Samer Rashwani (Professor in the Master's of Applied Islamic Ethics at the College of Islamic Studies, Hamad Bin Khalifa University)
Day one concluded with Dr Samer Rashwani's session, "AI’s Moral Consciousness: The Ethical Values Embedded in the AI Instructions," which explored the profound challenges of embedding ethical orientations within AI algorithms. He articulated that although AI lacks consciousness in the human sense, the ethical instructions it operates under reflect deliberate design choices that bear significant ethical implications. The discussion highlighted several frameworks, such as Algo-ethics, Responsible AI, Human-centred AI, and Trustworthy AI, which are critical in developing ethical guidelines and benchmarks to ensure AI’s ethical integrity.
Dr Rashwani also emphasised the potential of Islamic ethical traditions in shaping AI ethics, advocating for AI systems that resonate with cultural and religious values, particularly those emphasising justice, human dignity, and stewardship. He pointed out the risks of delegating ethical decision-making to AI systems and argued for restricting AI’s role in data collection and fact-checking to maintain human oversight and moral responsibility. This approach, he suggested, ensures AI remains a tool for ethical reflection, not an autonomous moral entity, thus safeguarding human values and accountability.
Day 2: Sept 25th, 2024
Imperial Laboratories: Understanding Data Colonialism and AI Empire in Historical Context
Dr Kerry McInerney (Research Associate at the Leverhulme Centre for the Future of Intelligence, University of Cambridge)
Day two of the Winter School commenced with Dr Kerry McInerney's talk, "Imperial Laboratories: Understanding Data Colonialism and AI Empire in Historical Context," which delved into the complexities of 'data colonialism' and its extension through AI technologies, drawing parallels with historical colonial practices. She highlighted how Big Tech employs techniques like those of past imperial regimes, extending exploitation and control into the digital realm. Dr McInerney utilised theoretical frameworks from studies of imperialism, racial capitalism, and comparative racialisation to explore these dynamics. Her discussion also included an examination of how postcolonial states such as China, India, and Israel repurpose historical colonial tactics to serve nationalist ends, using Darren Byler's concept of the 'subimperial state' to analyse these phenomena.
Key issues addressed were the perpetuation of global inequalities by AI technologies and the role of AI in both reinforcing and potentially mitigating these long-standing injustices. The session raised important questions about the scientification of racial discrimination through genetic testing, the applicability of critical disability studies to AI surveillance technologies, and initiatives aimed at 'decolonising AI.' Dr McInerney emphasised the need for an ongoing critical approach to understanding the roles of AI in contemporary imperialistic structures and called for more comprehensive frameworks for AI governance that account for these deeply rooted inequalities.
Gaming the Oppressive System: Dynamics of AI Power and Resistance in the Arab World
Dr Reham Hosny (Assistant Professor of Digital Literary Studies at Minia University and Associate Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge)
Dr Reham Hosny's presentation at the CILE-LCFI Winter School, titled "Gaming the Oppressive System: Dynamics of AI Power and Resistance in the Arab World," provided an insightful analysis of the dual role of artificial intelligence within the Arab region. Dr Hosny explored how AI is utilised by authoritarian regimes and tech corporations for surveillance, censorship, and control, reinforcing existing power hierarchies. Conversely, she highlighted how Arab activists harness these technologies to challenge and subvert oppressive structures, turning tools of surveillance into mechanisms of resistance.
Key points from her discussion included the exploitation of AI for state and corporate surveillance, employing technologies like facial recognition and predictive policing to monitor and suppress dissent. In contrast, activists repurpose these technologies to bypass state-imposed censorship and organise protests, using creative digital tactics to evade algorithmic detection on platforms such as Facebook. Dr Hosny also stressed the importance of developing an ethical AI framework that is culturally and religiously informed, particularly one that aligns with Islamic ethical principles and the socio-political context of the Arab world.
The session raised critical questions about the possibility of establishing an international AI governance body that would include diverse ethical perspectives, and the ethical implications of social media platforms prioritising profit over people, especially in sensitive regions. Dr Hosny’s talk concluded with a call to action for creating AI systems that not only avoid reinforcing power imbalances but actively contribute to social and political liberation, underscoring the necessity for region-specific AI policies that harness AI's transformative potential responsibly.
Myth, Faith, and AI
Dr Kanta Dihal (Lecturer in Science Communication at Imperial College London and Associate Fellow of the Leverhulme Centre for the Future of Intelligence, University of Cambridge)
Dr Kanta Dihal’s session, titled “Myth, Faith, and AI,” delved into the profound impact of cultural narratives on the development and perception of artificial intelligence. Dr Dihal emphasised the bidirectional influence between science communication and public opinion, which not only shapes public understanding but also influences the research priorities within the scientific community.
Key points from her session included the significant role of cultural and historical narratives in shaping AI research. Dr Dihal pointed out that AI researchers are influenced by a variety of stories, from ancient myths to modern cinema, which shape their views on the potential and risks associated with AI technologies. She highlighted that these narratives often oscillate between extreme utopian and dystopian views in Western contexts, reflecting deeper cultural and philosophical attitudes towards technology and progress. The session also explored how myths and religious stories, like those of Prometheus or the Golem, have historically articulated human anxieties about creation and control—themes that are pertinent in current discussions on AI autonomy and capabilities.
Dr Dihal underscored the necessity of understanding these narrative influences to navigate the ethical landscapes of AI development and application effectively. The discussion raised several poignant questions, including the potential for these AI narratives to influence policymakers and the practical application of AI technologies. Concerns were also voiced about the compatibility of Western AI narratives with non-Western religious and ethical frameworks, and whether the prevalent narratives complicate the global governance of AI.
AI Mufti and the Value of Trust
Dr Mutaz Alkhatib (Assistant Professor of Methodologies and History of Islamic Ethics at the College of Islamic Studies, Hamad Bin Khalifa University)
Dr Mutaz Alkhatib's presentation, titled "AI Mufti and the Value of Trust," critically examined the integration of artificial intelligence (AI) into the process of issuing fatwas within Islamic jurisprudence. His talk highlighted the profound challenges and opportunities presented by AI in this traditional religious practice. Key points from his session included the potential of AI to enhance the efficiency and accessibility of fatwa issuance, but also the significant risks such as the loss of personal connection, the opacity of algorithmic decision-making, data biases, and the undermining of the traditional moral and ethical responsibilities held by muftis.
Dr Alkhatib stressed that while AI can support the dissemination of religious rulings, it should not replace the nuanced and context-sensitive judgment of a trained mufti. Furthermore, he proposed an ethical framework aimed at maintaining trust and integrity in AI-assisted fatwa issuance. This framework emphasises the need for human oversight, algorithmic transparency, mitigation of data biases, and the preservation of the human elements that are crucial in the fatwa process.
The key questions raised during the session reflected concerns about maintaining the diversity of Islamic jurisprudential sources in AI applications, the current real-world applications of AI in this field, and the practical and ethical implications of substituting human muftis with AI systems. Participants also queried the transparency of data used by AI systems and the potential impacts of AI on the traditional mufti-mustaftī relationship.
Protecting your Metaphysics: De-computerisation and the History of AI
Dr Jonnie Penn (Associate Teaching Professor of AI Ethics and Society, University of Cambridge)
The final speaker of the day was Dr Jonnie Penn, presenting "Protecting Your Metaphysics: De-computerisation and the History of AI," which offered a critical perspective on the history and future trajectory of artificial intelligence. Penn illuminated the complex origins and developments of AI, tracing its roots through various intellectual traditions and historical milestones. He critiqued the linear success narrative often presented in textbooks and popular media, proposing instead a history marked by significant shifts and failures which have broad material and ethical implications.
During the talk, Dr Penn highlighted the transitions from symbolic AI to expert systems, and eventually to connectionism that underpins modern machine learning and deep learning technologies. These shifts reflect changes not just in technological capabilities but also in the underlying goals and ethical considerations of the AI research community. Key points discussed include the impact of AI on statecraft, the computer industry, global finance, and empire building, demonstrating how AI's development is intertwined with broader socio-economic structures and power dynamics. Dr Penn emphasised the need for a critical approach to AI ethics that accounts for these influences, advocating for 'de-computerisation'—a reconsideration of the pervasive role of computing in society to mitigate its ecological, labour, and surveillance impacts.
The session spurred engaging questions about the role of control in AI development, proactive engagements with technology's potential dangers, and the adequacy of the addiction model in understanding digital technology's societal impact. Dr Penn's call for a deeper historical understanding of AI aimed to equip the audience with a more nuanced perspective that considers how past influences and present practices might shape the future of AI, urging for strategies that prioritise ecological sustainability and ethical integrity over unchecked technological expansion.
International Student Participation: Sept 27th, 2024
The final day of the 2024 CILE Virtual Winter School was dedicated entirely to student presentations, highlighting the contributions of emerging scholars in the realm of AI and ethics. This inclusion of student research fostered a collaborative environment and bridged the gap between established academics and the next generation of scholars, enriching the seminar’s discussions with fresh perspectives and innovative ideas.
Hisham E. Hasan delved into the ethical considerations of implementing AI in pharmacy practice within the MENA region. His cross-sectional study highlighted significant concerns such as patient data privacy and the impact of AI on employment, emphasising the need for robust ethical frameworks in digital health.
Siti Liyana Azman presented a critical view of Islamic AI ethics, particularly focusing on its underutilisation in Malaysia despite the country's majority Muslim population. She explored the competition between religious and secular ethics frameworks and constitutional barriers, presenting a compelling case for why Islamic ethics often remain sidelined in governmental policies.
Krutika Patel explored generative AI art’s role in visualising alternate histories of South Asia, critiquing the reinforcement of cultural hegemonies and examining how these visualisations reflect decolonial curiosity yet perpetuate existing biases. Her research highlighted the need for a critical examination of AI models used in cultural representations.
Mudassar Baig addressed the ethical dimensions of the Metaverse, an area receiving increased attention post-Facebook’s rebranding to Meta. His research focused on the Islamic ethical perspectives on Metaverse technologies, identifying gaps in the literature and exploring the moral reasoning within online fatwa literature.
Mohammed Qasim Khan investigated the dynamics between Islamic revivalism and AI advancements, aiming to understand how AI technologies influence religious discourse and community engagement in the Islamic world. His systematic literature review sought to bridge the gap between Islamic traditionalism and technological progress.
Nursena Çetingül presented on the intersection of neurolaw, AI, and the Kalāmic view on brain-computer interfaces (BCIs). Her discussion focused on the ethical implications of accessing neural data through BCIs, the impact on free will, and the protection of human dignity within the Kalām tradition.
These presentations not only contributed to the depth of discussions on AI and ethics but also showcased the global and diverse academic engagement that defines the Winter School, emphasising the ongoing need for an inclusive and critically engaged dialogue in the field.
The Way Forward
The 2024 CILE-LCFI Virtual Winter School highlighted the pressing need for a thoughtful discourse on AI that includes a diverse range of cultural and religious perspectives. As AI technologies continue to permeate various facets of daily life, the insights generated from this seminar underscore the importance of an ethical framework that respects and integrates the global diversity of values and beliefs.
As AI continues to evolve, events like the CILE-LCFI Virtual Winter School are vital for ensuring that ethical considerations remain at the forefront of technological advancements. The discussions and outcomes from this seminar will undoubtedly contribute to shaping a more inclusive and equitable future for AI development globally.