Professional networking platform LinkedIn has confirmed that it automatically uses personal user data to train artificial intelligence (AI) models without first informing its members.
The California-headquartered company said in a Sept. 18 blog post that it has updated the privacy policy section of its terms of service to include language clarifying how it uses the information shared with it “to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (‘generative AI’) and through security and safety measures.”
The platform said members have a setting that lets them opt out of having their data used for generative AI training.
LinkedIn is owned by Microsoft, which has invested heavily in OpenAI, the developer behind ChatGPT. According to the FAQ section of the platform’s website, the AI models used to power generative AI features may be trained by LinkedIn or another provider, such as Microsoft’s Azure OpenAI service.
The networking site said it uses generative AI for features such as its writing assistant and for suggesting posts or messages.
The personal data processed and used to train AI includes members’ posts, usage information, inputs and the resulting outputs, language preferences, and any feedback they provide, LinkedIn said.
When LinkedIn trains generative AI models, it seeks to “minimize personal data in the data sets” used to train them, including by using privacy-enhancing technologies that redact or remove personal data from the training dataset, the company said.
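LinkedIn has not described how that redaction works. As a loose illustration only, the general idea of stripping identifiers from text before it enters a training corpus can be sketched in a few lines of Python; the patterns and the redact_personal_data function below are assumptions made for this example, not LinkedIn’s actual pipeline, and real privacy-enhancing technologies typically go much further (for example, named-entity detection or differential privacy).

```python
import re

# Illustrative regexes for two common identifier types. These are simplified
# and would miss many real-world formats.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_personal_data(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before training use."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Reach me at jane.doe@example.com or +1 (555) 012-3456 about the role."
print(redact_personal_data(sample))
# -> Reach me at [EMAIL] or [PHONE] about the role.
```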
LinkedIn said the updates to its terms of service will go into effect on Nov. 20.