
DoorDash Deploys AI to Detect, Prevent Verbal Abuse on Platform

DoorDash has added an artificial intelligence (AI) feature designed to detect and prevent verbal abuse or harassment on the platform.

The new SafeChat+ is meant to protect both customers and delivery drivers, the on-demand ordering and delivery platform said in a Tuesday (March 12) press release.

If the feature detects inappropriate conversation between a customer and a driver, it will give the driver the option to cancel the order, according to the release. Doing so will not impact the driver’s rating.

If an inappropriate conversation happens after the order has been completed, the feature will end any further chat, the release said.

In cases where it is the driver who engages in inappropriate conversation, the customer can contact DoorDash support via chat or phone for assistance, per the release.

“Our Trust & Safety team will investigate all incidents identified by the new tool and take appropriate actions to enforce our policies, which strictly prohibit any verbal abuse or harassment,” DoorDash said in a blog post.

While “more than 99.9% of deliveries” on the platform are completed with no safety-related incident, verbal abuse or harassment is the most common such incident that happens on DoorDash, according to the post.

The new feature uses AI to detect these incidents, scanning more than 1,400 messages per minute for signals of inappropriate communication, the post said.
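The flow the article describes (flag a message, offer mid-order cancellation without a rating penalty, or end the chat if the order is already complete) can be sketched as follows. This is a minimal illustration, not DoorDash's implementation: the real SafeChat+ model and its detection signals are not public, so the phrase list and function names here are invented for the example, and a production system would use a trained language model rather than keyword matching.

```python
from dataclasses import dataclass

# Toy stand-in for "signals of inappropriate communications" -- purely
# illustrative; the actual detection model is not disclosed.
ABUSIVE_PHRASES = {"idiot", "shut up", "useless"}

@dataclass
class ModerationResult:
    flagged: bool
    offer_cancellation: bool  # driver may cancel with no rating impact
    end_chat: bool            # post-completion incidents end further chat

def review_message(text: str, order_completed: bool) -> ModerationResult:
    """Flag a chat message and choose the follow-up action.

    Per the article: mid-order incidents let the driver cancel the order
    without affecting their rating; incidents after the order is completed
    simply end any further chat.
    """
    flagged = any(phrase in text.lower() for phrase in ABUSIVE_PHRASES)
    return ModerationResult(
        flagged=flagged,
        offer_cancellation=flagged and not order_completed,
        end_chat=flagged and order_completed,
    )
```

For example, a flagged message during an active order would trigger the cancellation option, while the same message after delivery would only close the chat.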

SafeChat+ covers “dozens of languages,” including English, French, Spanish, Portuguese and Mandarin, per the post.

“DoorDash takes privacy extremely seriously and that’s why the new feature does not access any personal information and only looks at the content of the message to identify inappropriate, abusive or harassing language,” the company said in the blog post.

In another development in AI-powered content moderation, OpenAI said in August 2023 that companies can use its large language model (LLM) to develop AI-assisted content moderation systems.

The company’s GPT-4 LLM can enable companies to perform content moderation with more accurate and consistent labels, a faster feedback loop for policy refinement and a reduced need for human intervention, OpenAI said in an Aug. 15 blog post.
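The approach OpenAI described pairs a written content policy with a piece of content and asks the model for a classification, so that refining the policy text (rather than retraining a model) tightens the labels. A minimal sketch of assembling such a request is below; the prompt wording, label names (K0/K1), and policy text are assumptions for illustration, not OpenAI's actual prompt format.

```python
# Illustrative policy with two labels, in the style of a written
# moderation guideline an LLM would be asked to apply.
POLICY = (
    "K1: the content harasses, insults, or demeans a person.\n"
    "K0: otherwise, the content is allowed."
)

def build_moderation_prompt(policy: str, content: str) -> str:
    """Assemble the classification request sent to the LLM.

    The model's answer (K0 or K1) would then be compared against expert
    labels; disagreements drive refinement of the policy text, giving the
    faster feedback loop the article mentions.
    """
    return (
        "You are a content moderator. Apply the policy below and answer "
        "with exactly one label (K0 or K1).\n\n"
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        "Label:"
    )

prompt = build_moderation_prompt(POLICY, "You are useless.")
```

The resulting string would be sent to a model such as GPT-4; because the policy lives in the prompt, a policy change takes effect on the next request rather than after a retraining cycle.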