LinkedIn, the popular professional networking platform, is under scrutiny following a lawsuit accusing the company of improperly using private messages from its Premium users to train artificial intelligence (AI) models. According to the complaint, filed in a California federal court, the platform introduced a privacy setting in August of last year that automatically enrolled users in a program allowing third parties access to their personal data.

The complaint alleges that LinkedIn then sought to obscure what it had done by updating its privacy policy a month later to state that user data could be disclosed for AI training purposes. A LinkedIn spokesperson dismissed the allegations as "false claims with no merit," insisting that the changes to its user agreement were made transparently.

The lawsuit also points to LinkedIn's 'frequently asked questions' (FAQ) section, which reportedly told users they could opt out of sharing their data for AI training, but added that opting out would not affect training that had already taken place.

The legal action contends that these practices show a pattern of LinkedIn attempting to cover its tracks and avoid accountability for how it shares user data. It seeks $1,000 (£812) in damages for each affected user, citing violations of the US federal Stored Communications Act, as well as breach of contract and violations of California's unfair competition law.

An email LinkedIn sent to users last year stated that the company had not enabled data sharing for AI training in the UK, the European Economic Area, or Switzerland. LinkedIn has more than one billion users worldwide, nearly 25% of them in the US, and generated approximately $1.7 billion in revenue from Premium subscriptions in 2023, a business that has grown rapidly alongside its expanding AI features.