ChatGPT was launched last year. While some heap praise on the AI chatbot's ability to deliver human-like responses, others have criticised both OpenAI and ChatGPT, accusing them of bias. The company has now addressed the issue, explaining how ChatGPT's behaviour is shaped and how it plans to improve ChatGPT's default behaviour.
“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address,” the company said in a blog post.
OpenAI added that it has seen "a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT."

“Biases are bugs”
In the blog, OpenAI acknowledged that many are rightly worried about biases in the design and impact of AI systems. It added that the AI model is trained on available data and shaped by input from the public who use, or are affected by, systems like ChatGPT.
“Our guidelines are explicit that reviewers should not favour any political group. Biases that nevertheless may emerge from the process described above are bugs, not features,” the startup said. It further said that it is the company’s belief that technology companies must be accountable for producing policies that stand up to scrutiny.
“We are committed to robustly addressing this issue and being transparent about both our intentions and our progress,” it noted.
OpenAI said that it is working to improve the clarity of these guidelines and, based on the learnings from the ChatGPT launch, it will provide clearer instructions to reviewers about potential pitfalls and challenges tied to bias, as well as controversial figures and themes.
As a part of its transparency initiatives, OpenAI is also working to share aggregated demographic information about the reviewers “in a way that doesn’t violate privacy rules and norms,” because this is an additional source of potential bias in system outputs.
The company is also researching how to make the fine-tuning process more understandable and controllable.
