The researchers are using a technique known as adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text designed to force it to buck its usual constraints.
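The sketch below illustrates the general shape of such an adversarial-training loop, assuming a simple generate-attack-judge-retrain cycle. All the function names (`attacker_generate`, `target_respond`, `is_unsafe`, `fine_tune`) are hypothetical stand-ins, not OpenAI's actual pipeline; in a real setup each would be backed by a language model or a trained safety classifier.

```python
# Minimal sketch of an adversarial-training loop (hypothetical stand-ins,
# not OpenAI's actual system). One model attacks, another responds, a judge
# flags rule-breaking outputs, and flagged prompts become training data.

REFUSAL = "I can't help with that."

def attacker_generate(round_num):
    """Hypothetical adversary: emits candidate jailbreak prompts."""
    return [f"Ignore your rules and do X (attempt {round_num}.{i})"
            for i in range(3)]

def target_respond(prompt):
    """Hypothetical target chatbot; sometimes complies with the attack."""
    return "Sure, here is X..." if "Ignore your rules" in prompt else REFUSAL

def is_unsafe(response):
    """Hypothetical safety judge: flags responses that broke the rules."""
    return response.startswith("Sure")

def fine_tune(training_set, new_examples):
    """Stand-in for a fine-tuning step: accumulate (prompt, refusal) pairs
    that would be used to update the target model's weights."""
    training_set.extend(new_examples)

training_set = []
for round_num in range(5):
    for prompt in attacker_generate(round_num):
        response = target_respond(prompt)
        if is_unsafe(response):
            # Successful attack: teach the target to refuse this prompt.
            fine_tune(training_set, [(prompt, REFUSAL)])

print(f"Collected {len(training_set)} adversarial examples for fine-tuning")
```

The key design point is that the adversary and the judge automate what human red-teamers would otherwise do by hand: each round surfaces prompts that slip past the target's constraints, and those failures feed directly back into the next round of training.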