London startup Checkstep raises £1.3 million for its AI-powered content moderation solution
A gargantuan amount of data is published online every day, making it difficult to ensure that none of it is toxic or harmful. To manage this, companies such as Facebook and YouTube hire teams of moderators whose sole purpose is to review and verify the comments and content posted. One way to speed up the process is to use artificial intelligence (AI), which is exactly what London-based startup Checkstep offers.
Checkstep uses AI-based solutions that can provide contextual moderation. As a software solution, it is intended to be faster than human review. The company has now announced a £1.3 million funding round, which will help it further develop its offering. It has also hired Kyle Dent as its head of AI and ethics.
Bigwigs participated in the funding
The latest funding round for Checkstep was led by Shutterstock founder Jon Oringer, former Uber chief commercial officer Emil Michael, and former Microsoft chief corporate officer Charles Songhurst. VCs, angel investors and private investors also backed the round. The company will use the new funding to advance its go-to-market plans and hire more people.
In a conversation with UKTN, the company’s CEO and founder, Guillaume Bouchard, said: “The bulk of the funding will go to R&D to evolve the software and policy coverage, providing more of the functionality needed to create a comprehensive ‘end-to-end’ content moderation process, from defining multiple policies to managing the appeal process.”
The company has also hired Kyle Dent as Checkstep’s AI ethics officer. Bouchard says: “With Kyle’s focus on the intersection of people and technology, we aim to humanize our content moderation process and AI tools. His expertise will certainly help us ask the right questions when developing products to mitigate the potentially negative effects of AI deployments, through serious consideration of ethical concerns.”
Leveraging AI to simplify tasks
Artificial intelligence systems typically learn from the datasets they are trained on. As they improve, they can be fed new data, which they can process quickly. Checkstep works similarly: its software “allows users to define and create their own policies, test them back on historical data, run automatic content tagging (potentially using their own internal AI categorizer) and manage the appeal process for users who disagree with moderation decisions,” notes Bouchard.
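To make the workflow Bouchard describes concrete, here is a minimal, hypothetical sketch of defining a policy and back-testing it against historical moderation decisions. The function names, the blocklist policy and the sample data are all illustrative assumptions, not Checkstep’s actual API.

```python
# Hypothetical sketch of policy back-testing; none of these names
# come from Checkstep's product.

def profanity_policy(text: str) -> bool:
    """Flag a post if it contains any term from a small blocklist."""
    blocklist = {"toxicterm1", "toxicterm2"}  # illustrative terms
    return any(word in blocklist for word in text.lower().split())

def backtest(policy, history):
    """Fraction of past human moderation decisions the policy agrees with."""
    agree = sum(policy(text) == was_removed for text, was_removed in history)
    return agree / len(history)

# Toy "historical data": (post text, was it removed by a human moderator?)
history = [
    ("have a nice day", False),
    ("toxicterm1 everywhere", True),
    ("totally fine comment", False),
]
print(backtest(profanity_policy, history))  # 1.0 on this toy sample
```

Back-testing a new policy against historical decisions, as described in the quote, lets a client see how a rule would have behaved before enabling it on live traffic.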
Checkstep also offers in-house AI training and provides selected models to its users. “After training the model, we publish a detailed report visible in the Checkstep user interface with the model summary, i.e. performance metrics on an evaluation set, which can be chosen by the client, a summary of the data used, possible biases and other important details of the model in a ‘model card’,” reveals Bouchard. Their system also collects feedback from moderators, which is used in model updates.
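The kind of evaluation summary a “model card” might carry can be sketched as below. The metric calculations are standard precision/recall; the model name, field names and bias entry are assumptions for illustration, not Checkstep’s actual report format.

```python
# Illustrative model-card summary; field names and values are hypothetical.

def evaluate(preds, labels):
    """Precision and recall of binary predictions against true labels."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

model_card = {
    "model": "hate-speech-classifier-v1",          # hypothetical name
    "eval_set": "client-chosen sample",            # per the quote above
    "metrics": evaluate([True, False, True, True],
                        [True, False, False, True]),
    "known_biases": ["sarcasm sometimes misread as hate speech"],  # example
}
print(model_card["metrics"])
```

On the toy predictions above, one of three flagged posts is a false positive, so precision is 2/3 while recall is 1.0, the sort of trade-off such a report would surface to the client.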
Speaking about the challenges, Bouchard notes that the main one is addressing the diversity of types of harm. In content moderation this can be particularly difficult since, for example, hate speech alone can be divided into ten different categories. The company addresses these issues while trying to ensure the AI itself is not biased, as training it to understand the nuances of language and context can be quite difficult. “At the same time, we need to train an AI system that actively promotes healthy conversations without censoring or limiting individuals’ freedom of expression,” adds Bouchard.
The competition, the future and more
Checkstep faces direct competition from companies that have developed in-house trust and safety systems. However, these rivals often struggle to do everything on their own, and that is where the startup comes in. Checkstep aims to help such platforms by developing specialized models and allowing them to deploy their own models on its platform.
“Other moderation companies focus on subsets of harm, for example disinformation or terrorist content, while others focus on keeping children safe,” notes Bouchard. “Checkstep offers a comprehensive set of solutions to tackle all online harms, including the ability to flag content against multiple policies.”
Speaking about the future of content moderation, Bouchard says that while the current focus is on removing toxic content and other types of harm, the field still has to evolve. He adds that it must become “one of the essential tools for protecting democracy”. He notes that content moderation is sometimes seen as censoring free expression, but it is really about removing the voices of bad-faith actors who deliberately disrupt conversations, which in turn increases freedom of expression for the population as a whole.
“Yet it is extremely important to balance action on content, i.e. to ban without bias, but also to do so in a way that cannot be confused with any form of censorship. For example, ensuring that people have the right to appeal and simplifying that process for everyone,” concludes Bouchard.
Checkstep currently employs 20 people, with offices in London and Sofia and soon in the USA. The company is recruiting in engineering and machine learning operations and plans to grow its sales and marketing team by the end of the year.