Conversation AI
Conversation AI is an initiative to protect voices in conversation.
We develop machine learning models to classify the impact of comments
on conversations, and we serve these to platforms via the Perspective
API. We also conduct experiments and publish original research to
explore the strengths and weaknesses of ML as a tool for combating
online toxicity and harassment. Further details can be found on our
research resources page and our blog.
Vision
Globally, fewer people are silenced and more people are able to
safely engage in good-faith discussion online. Our team aims to lead
by example in the ethical practice of building technology.
Our Values
- Community: Communities should responsibly shape their discussions.
- Transparency: Open processes enable better outcomes and trust.
- Inclusivity: Diverse points of view make discussions better.
- Privacy: We are privacy conscious in design and execution.
- Topic-neutral: Good-faith discussion can happen on controversial topics.
Initiatives
Perspective API
Perspective (demo, developer site)
is a free API that helps you host better conversations online. The API uses
machine learning to analyze a string of text and predict the
perceived impact it might have on a conversation. This prediction
comes in the form of a score, which you can use to give feedback to
commenters, help moderators more easily review comments, allow readers
to more easily find interesting or productive comments, and more (see
gallery of use cases).
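As an illustration, here is a minimal sketch of the request body you would send to the Comment Analyzer's `comments:analyze` method and how a score could be read from its response. The helper function names and the review threshold are our own for illustration; consult the developer site for the authoritative request and response formats, and note that real calls require an API key:

```python
# Sketch of scoring a comment with the Perspective API (v1alpha1).
# Helper names and the 0.8 threshold are illustrative, not part of the API.

def build_analyze_request(text):
    """Build the JSON body for a comments:analyze request."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity_score(response):
    """Pull the summary TOXICITY probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Abridged example of the shape of a response:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    },
    "languages": ["en"],
}

score = extract_toxicity_score(sample_response)
# A platform might hold comments above a chosen threshold for human review:
needs_review = score >= 0.8
```

Because the score is a probability rather than a verdict, each platform can pick its own threshold (or several) to match its moderation workflow.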
Our models are not perfect and will make errors: they will be
unable to detect patterns of toxicity they have not seen before, and
they may falsely flag comments that merely resemble previously seen
toxic patterns. To help improve the machine learning, the API
supports sending our team suggested scores - learn more at
'Contribute Feedback'.
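A suggested score is simply the comment together with the score a human reviewer would have assigned. As a hedged sketch (field names modeled on the v1alpha1 suggested-score method; verify them against 'Contribute Feedback' and the API reference before use):

```python
# Sketch of a suggested-score feedback body for the Perspective API.
# Field names are assumptions modeled on the v1alpha1 API; check the docs.

def build_suggest_score_request(text, human_toxicity):
    """Build a feedback body reporting the score a human would give."""
    return {
        "comment": {"text": text},
        "attributeScores": {
            "TOXICITY": {"summaryScore": {"value": human_toxicity}}
        },
    }

# Example: report that a comment the model over-scored is actually benign.
feedback = build_suggest_score_request("you make a fair point", 0.05)
```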
Finally, to stay informed on new attributes, language support,
and features, we encourage you to join the
perspective-announce group.
Tune
Tune
(documentation)
is a Chrome extension that helps people adjust the level of toxicity
they see in comments across the internet.
Moderator
Moderator
is an open-source tool that uses machine learning to help moderators
identify and reduce toxicity in forums and comment sections.
Who is working on this?
This research effort is led by Jigsaw
and the Google Counter-Abuse Technology Team.