We combine ML content scoring and NLP topic clustering to identify harmful narratives online earlier and more accurately than human analysis alone
About
Factmata has developed a Narrative Monitoring product that combines two innovative technologies: Topic Clustering and Content Scoring. Topic Clustering reads the many opinions shared across social media, blogs, forums, and news articles, identifies similar opinions, and groups them into human-understandable ‘narratives’ that can be tracked over time. Content Scoring uses Natural Language Processing to read text and detect signals of harmful content, including racism, sexism, hate speech, fake news, and other material that could damage a brand's reputation.

Together, these technologies allow brands and PR agencies monitoring social media to identify harmful narratives earlier and more accurately than human analysis alone. Earlier identification means a quicker response and the chance to stop a harmful narrative before it spreads to mainstream news outlets. A better understanding of the narrative enables a more targeted, and therefore more effective, response.

In addition, we identify the social media accounts authoring the content, supporting direct action against bad actors and the mapping of broader networks of disinformation, misinformation, hate speech and other harmful content. Flexible graphs and reporting let anyone using our product to counter hate speech and discrimination track their progress and the declining impact of harmful online narratives.
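As a rough illustration of the clustering idea only (not Factmata's actual pipeline, whose models and parameters are not public), the sketch below groups a handful of example posts into opinion clusters using off-the-shelf TF-IDF features and k-means from scikit-learn, then labels each cluster with its most characteristic terms as a crude ‘narrative’ title. The example posts, feature choice and cluster count are all illustrative assumptions.

```python
# Minimal sketch of "narrative" discovery via topic clustering.
# Illustrative approximation only: embed posts with TF-IDF, group similar
# opinions with k-means, and label each cluster with its top terms.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "Brand X's new policy is blatantly discriminatory",
    "The new Brand X policy discriminates against minorities",
    "Loving the new Brand X flavours, great launch",
    "The Brand X launch event was fantastic",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Two clusters for this toy example; in practice the number of narratives
# would be chosen automatically or tuned per brand/industry.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vectorizer.get_feature_names_out())
for cluster_id in range(km.n_clusters):
    # Use the highest-weighted terms in the cluster centroid as a rough title.
    top_terms = terms[km.cluster_centers_[cluster_id].argsort()[::-1][:3]]
    members = [p for p, label in zip(posts, km.labels_) if label == cluster_id]
    print(f"Narrative ~ {' / '.join(top_terms)}: {len(members)} posts")
```

In a production setting, each narrative's documents would additionally be scored by the harmful-content classifiers described above, so that only clusters carrying reputational risk are surfaced for review.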
Key Benefits
Our topic clustering significantly reduces the time spent making sense of the massive amount of data available online about a brand or industry. Unlike our competitors, our AI finds similar ‘opinions’ and groups them into what we call ‘narratives’. Clients see an overarching title that summarises all the underlying opinions, providing instant, actionable insight. Topic clustering clarifies, simplifies and speeds up the lengthy process of finding the valuable opinions among hundreds of thousands of comments, tweets and mentions.

We have 19 individually trained content classification models capable of detecting, within these narratives, every kind of harmful content that could damage a brand's reputation. One of these is unique to Factmata: our 'Stance' model. Many content monitoring platforms still rely on sentiment analysis, which has proven inadequate and can produce misleading results. Unlike sentiment, our Stance model considers the subject(s) of a statement in order to accurately establish the opinion holder's position towards those subjects. It does not simply identify whether the words used are positive, negative or neutral; it identifies positive, negative or neutral intent towards the subject, e.g. a brand name.

Our technology is built for scale and can process millions of mentions in minutes, which allows us to serve thousands of clients via our self-serve model.
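To make the sentiment-versus-stance distinction concrete, here is a minimal sketch using a publicly available zero-shot NLI model from Hugging Face (facebook/bart-large-mnli). It is not Factmata's proprietary Stance model, and the example text, labels and hypothesis template are illustrative assumptions; the point it demonstrates is that target-aware stance can read a superficially positive sentence as unfavourable towards the named brand.

```python
# Sketch of target-aware stance scoring with an off-the-shelf zero-shot model.
# This only illustrates the idea of scoring opinion toward a named subject.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "It's great that regulators are finally cracking down on Brand X."
# Plain sentiment would likely read this as positive because of "great".
# Target-aware stance instead asks: what is the author's position towards Brand X?
result = classifier(
    text,
    candidate_labels=["favourable", "unfavourable", "neutral"],
    hypothesis_template="The author's stance towards Brand X is {}.",
)
print(result["labels"][0])  # likely "unfavourable" despite the positive wording
```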
Applications
Our product caters to anyone who needs to understand what is being said online and, in particular, to identify harmful content: we work with individuals, businesses and non-profit organisations. Our most prominent clients, however, are brands and marketing and PR agencies. Their analysts benefit the most from our product, as it significantly reduces the time it takes to find valuable insights in the mass of information available online. Instead of manually trawling every platform and source to find information on a topic, spotting risks or trends, then tracking, organising, graphing and presenting them, much of that work is automated for the user.