The company has already been subject to data protection complaints in several EU countries.
US tech giant Meta confirmed to L’Observatoire de l’Europe today (July 18) that it will not deploy its multimodal AI models, which power virtual assistants, in Europe due to regulatory uncertainty.
“We will launch the multimodal Llama model in the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment,” a Meta spokesperson said.
The news, first reported by Axios, comes after the company paused the rollout of its AI assistant in Europe when the Irish Data Protection Commission asked Meta to delay plans to use data from adult users of Facebook and Instagram to train large language models (LLMs).
Meta had updated its privacy policy to allow the collection of all public and non-public user data – except private conversations between individuals – for use in current and future AI technology, with the change due to take effect on June 26.
In response, the Austrian privacy advocacy group NOYB filed complaints with data protection authorities in eleven EU member states, claiming Meta’s practices did not comply with the EU’s General Data Protection Regulation (GDPR).
NOYB requested an “urgent procedure” under EU data protection rules. It argues that the change is concerning because it affects the personal information of roughly 4 billion Meta users.
Calling the delay a “step backwards” for European innovation, Meta said at the time that it was confident its approach complied with European laws and regulations.
Ireland’s Data Protection Commission (DPC) told L’Observatoire de l’Europe in June that Meta had delayed the rollout after a number of queries from the DPC were addressed. The DPC said Meta had given users four weeks’ notice before training was due to begin.
Meta develops its own family of large language models, called Llama; the latest version, Llama 3, was released in April and powers the Meta AI assistant.