Profanity filtration, also known as profanity filtering, is the process of automatically detecting and blocking offensive or inappropriate language in text, audio, or video content.
We have implemented filters to help protect vulnerable users from exposure to offensive language, reducing potential emotional harm. Currently, the filter is available on our Peer-to-Peer Texting campaign module.
To start, enable the Profanity Filtration feature while setting up the campaign. You also need to specify the usernames of the agents who can review the profane messages. Any explicit or vulgar message from a contact will be blocked and sent to these agents for review.
Next, whenever a profane message is detected, the system stops it from reaching the assigned agent and routes it to the reviewer agent(s)' spam folder for review. The reviewer can mark the message as non-profane to forward it to the originally intended agent.
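The detect-block-review flow above can be sketched as follows. This is a minimal illustration, not the actual implementation: the word list, folder structures, and function names are all assumptions for demonstration.

```python
# Minimal sketch of the profanity routing flow: a flagged message is
# diverted to a spam folder for review instead of the agent's inbox,
# and a reviewer can release it back to the agent.
# The word list and function names are illustrative assumptions.

PROFANE_WORDS = {"badword1", "badword2"}  # placeholder word list

def route_message(message: str, agent_inbox: list, spam_folder: list) -> str:
    """Deliver clean messages to the agent; divert profane ones for review."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & PROFANE_WORDS:
        spam_folder.append(message)  # held for the reviewer agent(s)
        return "blocked"
    agent_inbox.append(message)      # delivered normally
    return "delivered"

def mark_non_profane(message: str, spam_folder: list, agent_inbox: list) -> None:
    """Reviewer overrides the filter: forward the message to the agent."""
    spam_folder.remove(message)
    agent_inbox.append(message)
```

A real filter would use a more robust detection model than a word list, but the routing logic (block, hold for review, release on override) follows the same shape.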
As a campaign manager, you can view the blocked messages and the contacts that sent them in the Campaign Overview and the Results section.
You can see the number of messages blocked by the Profanity Filter and the number of unique contacts that sent profane messages. Clicking "View messages" or "See contacts" redirects you to the Responses section, where you can see the list of messages and the contacts that sent them.
You may also navigate to the Responses section directly, choose the status "Profanity detected conversation," and click "Apply Filter" to view the list of messages or contacts that sent profane messages.
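Conceptually, applying the status filter selects only the conversations flagged by the profanity filter. A hypothetical sketch, where the record shape is an assumption for demonstration:

```python
# Illustrative sketch of the Responses status filter: keep only the
# conversations whose status matches the chosen value. The record
# fields shown here are assumptions, not the product's data model.

conversations = [
    {"contact": "+15550001", "status": "Profanity detected conversation"},
    {"contact": "+15550002", "status": "Replied"},
]

def apply_filter(records, status):
    """Return only the records matching the chosen status."""
    return [r for r in records if r["status"] == status]

flagged = apply_filter(conversations, "Profanity detected conversation")
```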
You can also export the messages sent in the campaign by using Export Results with the applied filters.
Profane message detection on the Agent console -
Whenever profanity is detected in an incoming message from a contact, the message is filtered and made available for review to the campaign manager on the admin console and to the reviewer agent(s) on the agent console.