by Konstantinos Karakostas, PhD – Director of Machine Learning, Commetric

Amid all the hype around ChatGPT, Bing Chat, Google Bard and other recently launched user-facing tools, we are all reading more and more stories about the potential impact of generative artificial intelligence. Generative AI has the potential to transform the way we live and work, but it is important to understand its limitations and use it responsibly. It also presents a compelling opportunity for the communication measurement and evaluation industry to become more efficient and productive. Let us now look at some key concepts.

The lay of the land

Generative AI refers to the ability of Artificial Intelligence to generate content that is new and original, based on the input it receives. This is in contrast to traditional AI, which operates based on specific instructions and algorithms. Generative AI can produce a wide variety of content, including text, images, and videos, that can be used for a variety of applications, such as content creation, product design, and even medical research. One of the key features of generative AI is its ability to learn from large amounts of unstructured data and use this knowledge to generate new content.

Generative AI is based on large language models (LLMs), which are trained on massive amounts of text data for natural language processing (NLP) tasks. Modern LLMs are built on a neural network architecture known as the Transformer; combining this architecture with self-supervised pre-training on vast text corpora produces large foundation models. These foundation models serve as the starting point for more specialised and sophisticated models that can be tailored to specific use cases or domains.

Know the limits

While generative AI presents a compelling opportunity to augment human efforts and make enterprises more productive, it is important to understand its limitations. One of the key limitations is the cost to train and maintain foundation models, which can be expensive for smaller enterprises to undertake on their own. Additionally, these models may not always be trustworthy, as they are often trained on large amounts of unstructured data, some of which may be biased or contain toxic information.

It is also important to note that generative AI is not a one-size-fits-all solution, and choosing the right LLM for a specific job requires expertise. For example, Google’s BERT is designed to understand bidirectional relationships between words in a sentence and is primarily used for text classification, question answering, and named entity recognition. GPT, on the other hand, is a unidirectional transformer-based model primarily used for text generation tasks such as language translation, summarisation, and content creation.

In general, LLMs can generate coherent and complex text that resembles human writing, such as news articles, poetry, or even entire books. They can also perform language-related tasks such as language translation, summarisation, and answering questions based on text. However, LLMs still struggle with understanding the context, sarcasm, and humour in language and can produce biased or inaccurate results when trained on biased or inaccurate data. Additionally, they are not capable of understanding emotions, reasoning, or making ethical judgments.

PR’s AI revolution?

LLMs are already showing great promise for the PR measurement and media analytics industry, with their ability to quickly and accurately analyse large amounts of text data. One of the main benefits of LLMs is that they can easily identify patterns and trends in almost any language, providing valuable insights into how the media are reporting on a brand, product, or service, or how users are talking about it on social media.

In the world of media analytics, LLMs are being used to track and analyse mentions of a brand across various media channels, including social media, news articles, and online forums. By understanding the sentiment and tone of these mentions, brands can gain valuable insights into how their products or services are interpreted by media stakeholders or target audiences.
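To make the idea of tone-tagging concrete, here is a deliberately minimal sketch in Python. It uses simple keyword matching rather than an actual LLM, and the mention texts and keyword lists are invented for illustration; a production system would use a fine-tuned language-model classifier instead.

```python
# Toy sketch: tag the tone of brand mentions with keyword matching.
# NOTE: a real media-analytics pipeline would use an LLM classifier;
# the mentions and keyword sets below are invented examples.

POSITIVE = {"praised", "innovative", "excellent", "loved"}
NEGATIVE = {"criticised", "outage", "disappointing", "recall"}

def tag_tone(mention: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for one mention."""
    words = set(mention.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

mentions = [
    "Analysts praised the innovative product launch",
    "Customers report a disappointing service outage",
    "The company announced quarterly results",
]
for m in mentions:
    print(f"{tag_tone(m)}: {m}")
```

Even this crude approach shows the shape of the task: classify each mention, then aggregate the labels across channels to see how coverage trends over time.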

LLMs are also being used to measure PR campaigns’ effectiveness – for instance in terms of achieving message pull-through. By analysing the language used in media coverage, LLMs can provide insights into messaging effectiveness and help identify areas for improvement.
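Message pull-through can be approximated as the share of coverage items that carry each key message. The sketch below uses case-insensitive substring matching; real analyses would use semantic matching via an LLM, and the key messages and article snippets here are invented for illustration.

```python
# Toy sketch: message pull-through as the fraction of coverage items
# containing each key message (case-insensitive substring match).
# The messages and article snippets are invented examples.

key_messages = ["sustainable packaging", "carbon neutral"]

coverage = [
    "The firm unveiled sustainable packaging across its product range.",
    "Critics questioned whether the brand is truly carbon neutral.",
    "A new store opened in Manchester this week.",
]

def pull_through(messages, articles):
    """Return {message: fraction of articles mentioning it}."""
    results = {}
    for msg in messages:
        hits = sum(msg.lower() in art.lower() for art in articles)
        results[msg] = hits / len(articles)
    return results

for msg, share in pull_through(key_messages, coverage).items():
    print(f"{msg}: {share:.0%} of coverage")
```

Tracking these shares over a campaign's lifetime shows whether key messages are gaining or losing traction in the coverage.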

However, it is important to note that LLMs are not a replacement for human analysis and interpretation. While they can quickly and accurately analyse large amounts of text data, they lack the ability to understand the nuances of language and context that a human analyst can provide, as well as the background needed to understand clients and tailor insights to their communication objectives.

Overall, generative AI and LLMs are powerful tools for PR measurement and media analytics. They have the potential to transform the way we work, but it is important to understand their limitations and use them responsibly – particularly when integrating generative AI output into client-facing platforms or reports. We as a professional community need to understand how we can make the best use of generative AI technologies and how we can participate in and shape important conversations around ethics and compliance in relation to generative AI.

This article is part of AMEC’s The Innovation Hub Series