The AI Safety Summit, held in the UK last November, has been criticised for its lack of impact, according to Labour MP Steve Race.
- Exeter MP Steve Race described the previous government’s efforts to lead on AI regulation as ineffective, labelling the summit a ‘damp squib’.
- The summit was criticised for failing to deliver significant results or any follow-up to its discussions.
- Labour Digital chair Casey Calista highlighted the exclusion of civil society as a major shortcoming of the event.
- Brittany Smith from OpenAI pointed out that the summit focused on future risks rather than addressing existing technological harms.
The AI Safety Summit, which took place in the UK last November, was positioned as a landmark event for establishing the nation as a leader in global AI regulation. However, Exeter MP Steve Race criticised the event, stating that it was a ‘damp squib’ that ultimately ‘didn’t go anywhere.’ Race noted that the UK, with its trusted status in international regulatory matters, was well-placed to lead such discussions, but the summit failed to make a lasting impact.
Race argued that the summit was a missed opportunity for the previous government to assert its influence on global AI policy. Although the UK was ideally suited to convene global voices on AI policy, he said, the summit lacked ‘follow-through’: the Conservatives were right to attempt to lead, but the event itself was underwhelming and did not move the conversation forward effectively.
Casey Calista, chair of Labour Digital, echoed these sentiments and identified a major shortcoming of the summit: the exclusion of civil society. She criticised the former government for treating civil society as an afterthought and said the current Labour government would take a more inclusive approach, involving diverse voices in future AI regulatory initiatives.
Brittany Smith, head of UK policy and partnerships at OpenAI, added that the summit focused disproportionately on potential future risks from AI while neglecting the harms the technology is already causing. She pointed to flawed facial recognition technology used by law enforcement, which she said could ‘ruin lives’ through injustices.
Following the initial summit and the AI Seoul Summit, the UK plans to host a further event in California this November, aiming to implement the safety protocols highlighted in earlier discussions and to build them into a more substantial global AI safety framework.
The AI Safety Summit’s shortcomings highlight the need for more inclusive and impactful discussions on AI regulation.