What’s the problem: Companies developing generative AI, from pioneering start-ups like OpenAI and Anthropic to established Big Tech players like Microsoft, Google, Adobe and Snapchat, are leveraging PR and communications strategies to capitalise on the hype surrounding the technology’s science-fiction-like promises. Yet they are missing a critical element of the narrative: consumers are increasingly worried about their privacy.
As a recent Cisco survey found, there is a significant disconnect between consumer expectations and organisations’ approaches to privacy when it comes to AI. These concerns are being amplified by a growing number of media reports on high-profile companies like Apple, Bank of America, Samsung, JP Morgan, Amazon, Spotify, Verizon and Accenture, which are restricting their employees’ use of the technology for fear that it may expose them to data leaks.
Why it matters: In the highly competitive generative AI field, companies’ reputations no longer hinge solely on ground-breaking innovation; they are increasingly defined by ethical practices. Privacy concerns can heighten public mistrust and scepticism, which in turn can erode customer loyalty and impede user adoption. For start-ups, this can significantly impair the ability to attract funding, while for Big Tech firms it can mean regulatory fines and a decline in shareholder value. Moreover, amid the surge of generative AI offerings, many may soon be forgotten once the current hype fades if they fail to adequately address this issue.
What are the main pain points: Our analysis of 1,094 English-language articles published in the last 12 months showed that journalists frequently underline the risk of unauthorised access to sensitive data. These concerns were amplified when OpenAI, the creator of ChatGPT, confirmed that a bug in the chatbot’s source code may have caused a data leak.
However, data leaks are just the tip of the iceberg. The complexities of data anonymisation are also often highlighted, with journalists noting that despite efforts to anonymise data before AI training, generative models may still inadvertently produce identifiable information. This interplay between data privacy and generative AI is further complicated by the issue of consent. A significant media narrative revolves around how consent is procured for the data used to train these AI models, and whether it fully satisfies ethical and legal standards.
Simultaneously, media discussions frequently cite the lack of control as another worry. Reporters have underscored that once data is fed into a generative AI model, it becomes part of the model’s internal knowledge, which can’t be easily ‘erased’ or altered, potentially infringing upon data subject rights. Alongside this, data misuse is a regularly recurring theme in media debates. Journalists have raised concerns about the potential for generative AI models to produce harmful or misleading content, amplifying anxieties over digital security.
Our analysis found that the way generative AI developers communicate about data privacy fails to reassure the public, resulting in a significant trust gap. Instead of explicit disclosures, companies often resort to general statements about using “publicly available” data and publicise updates to their terms of use in response to negative media stories. As a result, data protection issues dominated the debate around many AI players:
Apart from OpenAI’s aforementioned troubles, Google received negative publicity over the alleged use of Google Docs data for AI training and the postponed launch of its AI chatbot Bard in the EU, while Microsoft faced scrutiny over revelations that staff members can read users’ chatbot conversations. Similarly, Adobe has been questioned about the potential use of user data for fuelling generative AI services, while Snapchat and Luka garnered substantial backlash over privacy concerns relating to vulnerable minors.
What PR and comms should do: Companies operating in the realm of generative AI predominantly employ traditional tech PR and communication strategies that focus on innovation narratives. However, growing consumer expectations around data protection necessitate a shift in comms approaches: privacy needs to be framed not as a technical extra, but as an integral, core component of the product offering itself. In addition, amid the escalating media noise around AI, tech companies can leverage their commitment to privacy as a distinguishing factor to stand out from the crowd and stay relevant after the current AI hype fades away.
With extensive experience working with tech firms, Commetric offers media analytics that serves as an invaluable resource for such proactive reputation management. It can provide insights into prevailing narratives and identify key influencers, painting a holistic picture of the data protection discourse while deep-diving into the key concerns and pain points. This data-driven understanding allows PR and communications directors to craft strategies that not only address these issues effectively but also foster positive outcomes.
By integrating Commetric’s media analytics into their strategy, tech firms can transition from simply managing data protection concerns to actively championing privacy. Furthermore, we can help tech companies to position themselves as leaders in privacy protection by identifying untapped opportunities or ‘white spaces’ within the ongoing media discourse on data privacy. This approach can show that tech companies value ethical considerations as much as they do technological advancements, helping to balance innovation with consumer trust.