Zoom, the popular video conferencing platform, is planning to roll out a new AI feature that could transform the way users communicate online. Announced at Zoom’s annual developer conference, the upcoming tool will let users create a photorealistic, AI-generated avatar of themselves, designed to facilitate asynchronous communication. The feature, however, is already raising concerns about potential misuse and the risks of deepfake technology.
A Leap Towards AI-Enhanced Communication
Set to launch in 2025, Zoom’s new feature will allow users to upload a short video clip of themselves, which the platform will convert into a digital avatar complete with realistic head, shoulder, and upper arm movements. Users can then script their avatars to deliver messages, making it easier to handle communication tasks such as meetings or presentations without being present.
Smita Hashim, Zoom’s Chief Product Officer, explained that these avatars are designed to streamline communication by allowing users to engage more efficiently. “Avatars save users precious time and effort recording clips, and enable them to scale video creation,” Hashim stated during her presentation at the conference.
While the potential for increased productivity is clear, the feature may also introduce significant risks, particularly in the context of the growing misuse of deepfake technology.
Deepfake Technology on the Rise
Deepfake technology, which uses AI to create lifelike digital copies of individuals, has been gaining traction in recent years. From viral social media posts featuring celebrities to political deepfakes, such content has increasingly blurred the line between reality and fabrication. This year alone has seen deepfake videos of public figures such as U.S. President Joe Biden and pop star Taylor Swift mislead millions of viewers online.
Nor are these AI-generated clips limited to entertainment. Deepfakes have been used in a wide array of criminal activities, including scams in which fraudsters impersonate loved ones. According to a 2023 report from the U.S. Federal Trade Commission (FTC), impersonation scams linked to deepfake technology resulted in financial losses exceeding $1 billion.
Given this alarming trend, Zoom’s new AI avatar tool is being met with concerns about how it could potentially be misused for harmful purposes.
Zoom’s Safeguards and the Deepfake Dilemma
Although Zoom acknowledges these risks, the company has yet to offer clear and detailed measures to prevent the misuse of its AI avatars. According to Hashim, Zoom plans to introduce “advanced authentication” mechanisms and watermarking as part of the tool’s built-in safeguards. “We employ technology to make it obvious when a clip is generated with an avatar and to help ensure the integrity of avatar-generated content,” she explained.
However, skeptics argue that these protections may not be sufficient. Watermarks, for instance, can be stripped out with basic screen-recording tools or editing software, making it harder to distinguish legitimate avatar clips from malicious deepfakes. And while Zoom’s terms of service will prohibit misuse, how those policies will be enforced remains an open question.
Zoom is not the first company to explore AI-powered avatars. Tavus, a similar tool, helps brands create virtual personas for personalized advertising, while Microsoft launched a service in 2022 that generates digital replicas of individuals. Both companies have incorporated strict safeguards, such as requiring verbal or written consent before using avatars. Zoom, however, has been less specific about its own protocols for preventing abuse, leaving many questions unanswered as the company prepares for the tool’s launch.
Potential Legal Implications and Regulatory Efforts
As the proliferation of deepfake technology continues to spark concerns worldwide, regulatory bodies are grappling with how to combat its dangers. The United States, while lacking federal laws that specifically criminalize deepfakes, has seen more than 10 states introduce legislation to address AI-assisted impersonation. One such law, proposed in California, could become the first to allow courts to demand the removal of deepfake content or impose financial penalties on offenders.
In the UK and Europe, where GDPR rules already govern the handling of personal data, there are growing calls for stricter oversight of AI technologies like deepfakes. Policymakers worry that AI advancements such as Zoom’s avatar tool could outpace the legal frameworks needed to protect citizens from manipulation and fraud.
The Road Ahead for Zoom’s AI Avatars
Zoom’s CEO, Eric Yuan, has long envisioned a future where AI can take over many routine communication tasks, from answering emails to participating in meetings. The AI avatar tool represents a step toward that vision, promising greater convenience and flexibility for users.
But as the platform continues to develop and refine this feature, Zoom must also address the very real risks associated with deepfake technology. With the release of AI avatars planned for early 2025, the spotlight will remain on Zoom to ensure that its innovation doesn’t contribute to the growing problem of AI-generated disinformation.
While the promise of more productive and efficient communication is appealing, striking the balance between innovation and security will be crucial to the feature’s success. Whether Zoom’s safeguards will be enough to prevent misuse remains to be seen; for now, the company is preparing to navigate a delicate path between progress and protection.
As deepfake technology continues to evolve, businesses and governments alike will need to stay vigilant to prevent its abuse — and Zoom is no exception.