Apple Intelligence Writing Tools Struggle with Swears and Sensitive Topics

With the release of the iOS 18.1 developer beta, Apple introduced users to its new Apple Intelligence features. Among these is Writing Tools, designed to reformat or rewrite text using Apple’s AI models.

However, the tool has some notable limitations, particularly when it encounters certain words and topics.

Writing Tools can be summoned anywhere across the system to adjust text. However, when users attempt to rewrite paragraphs or sentences containing swear words like “sh**” and “bastard,” they are met with a warning message.

The message states that “Writing Tools was not designed to handle this type of content” and cautions that the quality of the AI-generated results may vary.

This issue isn’t isolated. Other users have also encountered warnings when the tool struggles to maintain the original tone of the writing. In addition to swear words, references to drugs, killing, or murder trigger similar warnings.

Despite these warnings, Apple Intelligence still offers suggestions for problematic sentences. In one test, replacing “sh***y” with “crappy” removed the warning, but the AI’s suggestion remained unchanged.

Apple has been contacted for further details on the specific topics and words that Writing Tools is not trained to handle. Any updates from the company will be incorporated into this story.

Apple’s approach appears to be precautionary: by limiting the AI’s handling of certain words, topics, and tones, the company hopes to keep its technology from generating contentious content.

This cautious stance is not entirely new for Apple: for years the keyboard’s autocorrect refused to suggest swear words, and it wasn’t until iOS 17 that autocorrect gained the ability to learn the swears a user actually types.

With Apple Intelligence, it appears the company is once again erring on the side of caution to avoid potential regulatory scrutiny and to ensure the responsible use of its AI technology.
