Meta has implemented a policy that excludes political entities and other regulated sectors from utilizing its emerging generative AI advertising capabilities.
This move, announced by a company representative on Monday, is intended to curb the use of advanced technology that could amplify the spread of false information during election periods.
While the tech firm's advertising guidelines already forbid content that has been debunked by its network of fact-checkers, there are currently no specific rules governing AI-generated content.
The prohibition on certain advertisers was made public via updates to the company’s help center on Monday evening.
In a statement included in various sections detailing the operation of these AI tools, Meta specified, “In our ongoing trials of new Generative AI tools within Ads Manager, we do not allow advertisers who deal with Housing, Employment, Credit, Social Issues, Elections, Politics, Health, Pharmaceuticals, or Financial Services to employ these Generative AI functionalities.”
Meta believes this measure will help it fully assess potential risks and develop robust controls for the deployment of generative AI in advertising, particularly for content that touches on sensitive matters in regulated fields.
This development follows Meta's announcement last month that it intends to broaden access to its AI-driven ad tools, which can autonomously generate backgrounds, tailor images, and produce multiple variations of ad copy in response to simple prompts.
These tools, which had been accessible to a select group of advertisers earlier in the year, are expected to be available to all advertisers worldwide by 2024.
Meta, along with other tech giants, has been rapidly introducing generative AI advertising products and virtual assistants. This surge in activity has been partly sparked by the excitement surrounding OpenAI’s ChatGPT, a chatbot launched last year that can generate human-like text responses.