ChatGPT maker OpenAI released new documentation on Wednesday that lays out the desired behaviors of its popular chatbot, revealing how its software is designed to deal with "conflicting objectives or instructions," especially those that conflict with preset rules surrounding law-abiding behavior, personal privacy, and its approach to NSFW content.

This level of transparency over its models is potentially one of the reasons OpenAI finds itself securing a place on the Mount Rushmore of artificial intelligence pioneers.

However, being at the forefront of AI development, OpenAI has to decide which direction to head in next and how best to use its technology to "benefit humanity." AI-controlled nanobot medical technology? Artificial astronauts to aid in our exploration of the universe?

Or how about pornography, violence, racism, and profanity?

ChatGP-F'n-T: A depraved new world

Several rules are laid out in the documentation, one of which insists that its models "don't respond with NSFW content." The rule states that the assistant should "not serve content that's Not Safe For Work (NSFW)," citing "erotica, extreme gore, slurs, and unsolicited profanity" as primary examples. However, immediately following this rule is a commentary note that reads:

"We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area."

The OpenAI usage policies referenced in the commentary generally require users of the company's software to comply with applicable laws, avoid using the service to harm themselves or others, and respect the developer's safeguards.

These policies loosely cover the generation of deepfakes (digital alterations that superimpose a person's face or body onto a pre-existing photo or video, typically for malicious purposes) and the use of AI to create explicit images of others without permission or with intent to harm, at least when it comes to sharing the results.

The troubling thing about the commentary note in OpenAI's recently released documentation, read alongside the company's usage policy, is the implication that using its services for acts like these could one day be acceptable, so long as the output is for "personal consumption."

It's worth noting that the existence of this commentary doesn't guarantee that such features will follow, only that OpenAI is currently looking into them. How deeply the company is exploring this application remains to be seen.

OpenAI spokesperson Grace McGuire had little to share about what the company's exploration of responsibly providing NSFW content would include. However, speaking to WIRED, she shed further light on the purpose of the documentation, stating that it was an effort to "bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders."

Outlook

The AI landscape is in a constant state of evolution as newer models are developed to solve problems and simplify processes.
When we look at who is leading the charge, few can deny OpenAI's prominence. Hailing from San Francisco, the Golden City startup has become the golden child of the AI era, proving itself an industry figurehead following the release of the world's most popular AI software, ChatGPT.

While the world eagerly awaits the company's GPT-5 update, which promises to bring its chatbot one step closer to the goal of achieving artificial general intelligence (AGI), OpenAI has been innovating across other mediums as well.

In February of this year, OpenAI introduced the world to ChatGPT sister model Sora, a text-to-video model capable of generating ultra-realistic videos up to sixty seconds in length. Then, in late March, the company showcased Voice Engine, an AI model capable of complete voice cloning based on nothing more than a 15-second audio sample.

The latter is seen as another potentially dangerous application of AI, one whose potential for misuse is judged to outweigh its benefits when it comes to a wider public release. Meta has previously shown off a similar model called Voicebox and similarly judged it too prone to misuse, even after developing a "highly effective classifier" to distinguish authentic speech from artificially generated samples.

OpenAI's exploration of NSFW content within its ChatGPT platform could prove just as prone to misuse, and in all fairness, likely will. Comments from OpenAI indicate the feature is under discussion largely because of feedback from users. However, just because a portion of the company's chatbot users would like to do something doesn't necessarily mean they'll be granted the ability to do so.