Instagram recently introduced two new features intended to prevent harassment and online bullying on the platform.
"It is our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we continue to invest in better understanding and addressing this problem," said Adam Mosseri, head of the Facebook-owned photo-sharing platform, in a statement reported by The Straits Times on Tuesday (07/09/2019).
The first new feature warns users that a comment they are about to post may be considered offensive. The feature relies on artificial intelligence to detect such comments and shows the user a prompt: "Are you sure you want to post this?"
Instagram then directs users to read more about its policies on disruptive and offensive content.
"This intervention gives people the opportunity to reflect and cancel their comments, and it prevents recipients from receiving harmful comments," Mosseri said.
According to him, early testing found that the prompt led some people to cancel their comments.
"And to share something less hurtful once they have had a chance to reflect," he added.
Instagram is also testing a second feature that lets users restrict other users on their profiles. Once a user is restricted, their comments on that profile are no longer visible to the public.
A restricted user will also not be able to see whether the person who restricted them is online or has read their direct messages.
"We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to defend themselves," Mosseri said.
Instagram's efforts to fight the problem of abuse and bullying are not new. Bullying has been part of the dark side of Instagram and other social networks for years.
In 2017, under CEO Kevin Systrom, Instagram created an option for users to hide comments on their posts. The filter uses artificial intelligence to identify spam and offensive comments in nine languages.
In 2018, the company began filtering comments that attacked a person's appearance or character or contained threats. Instagram expanded the initiative later that year by using artificial intelligence to proactively detect bullying in photos and text and flag it for review by the Instagram Community Operations team.
