AI companies should not tackle ethical issues alone, says safety chief.
- Aleksandra Pedraszewska highlights potential bias from commercial interests.
- Well-researched solutions for content moderation and problematic user behaviour already exist and can be adopted.
- Collaboration with academia and regulation is necessary for ethical oversight.
- Recent incidents raise concerns over AI’s impact on user safety.
Aleksandra Pedraszewska, head of safety at a prominent London AI firm, argues that companies in the generative AI space should not navigate ethical concerns independently. She suggests that commercial interests could bias ethical decisions within companies, and that a broader, external approach is needed for effective oversight.
Pedraszewska points out that numerous reliable solutions already exist for AI content moderation and for managing problematic user behaviour. She advocates adopting these well-researched resources, which can be implemented immediately to strengthen AI safety functions.
She emphasises the importance of collaborating with academic researchers and regulatory bodies, who have a deeper understanding of complex ethical and policy issues. This external collaboration is vital because commercial entities may lack the objectivity required for impartial ethical decision-making.
Pedraszewska’s comments come amid heightened debate over stricter regulation of AI technologies, fuelled by incidents such as the tragic case involving a Character.AI chatbot. That case underscores the risks AI poses when it is inadequately regulated or understood.
The AI industry has also seen significant leadership changes, with resignations from key figures concerned about safety standards, including recent high-profile departures from OpenAI. These moves point to a possible tension between rapid technological advancement and ethical responsibility.
Ensuring the ethical use of AI will require a collective effort that extends beyond the industry itself, prioritising safety over competitive interests.