UK-based AI company Haiper.ai allows the unrestricted creation of ‘potentially harmful’ content, raising safety concerns.
- Haiper.ai, backed by Octopus Ventures, lacks effective safeguards compared to industry peers.
- UKTN testing revealed the platform generates misleading images of public figures.
- Haiper.ai’s terms discourage misuse, yet enforcement appears insufficient.
- Generative AI’s role in misinformation and scams sparks debate.
A recent investigation has raised questions about the safety protocols of Haiper.ai, a generative AI startup based in London. The company, backed by Octopus Ventures, offers notably weaker safeguards than many of its contemporaries, according to tests carried out by UKTN. Concerns have emerged over the platform’s ability to generate ‘potentially harmful’ content without adequate restrictions.
UKTN’s testing demonstrated the platform’s capacity to produce misleading images of well-known figures, such as British Prime Minister Keir Starmer holding a burning Israeli flag and a meeting between Donald Trump and pop star Taylor Swift. This highlights the risk of the tool being leveraged by malicious actors to disseminate false information or sow division.
Despite launching with an impressive £11 million investment led by Octopus Ventures, Haiper has faced scrutiny for its insufficient content moderation. Comparable platforms, such as OpenAI’s DALL-E, enforce stricter rules to prevent their technology being misused to fabricate images of individuals without consent.
AI safety experts have long cautioned against the unchecked use of image and video AI, especially to impersonate real people. With risks ranging from non-consensual deepfakes to bogus political statements and fabricated endorsements by public figures, developers are urged to prioritise public safety when deploying such technologies.
While Haiper’s terms of use discourage inappropriate use of its AI, UKTN’s experiments indicated a gap between policy and practice. Although the platform claims to detect breaches of its acceptable use policy, it failed to flag UKTN’s tests, raising concerns over its reliability in preventing misuse.
The UK’s political sphere has previously encountered AI-generated disinformation, with fraudulent audio clips of politicians circulating online. Such incidents have drawn attention to the broader implications of AI innovations in shaping public opinion and the potential for scams using fabricated endorsements by trusted personalities.
Moreover, Martin Lewis, a well-respected finance expert, has publicly condemned scams that falsely used his identity to promote investment schemes. His legal battle with Facebook over similar incidents underscores the seriousness of protecting public figures from such misrepresentations.
The situation with Haiper.ai highlights the critical need for robust protective measures in generative AI to prevent exploitation and preserve public trust.