ResearchBuzz: Firehose<p>CNBC: OpenAI will show how models do on hallucination tests and ‘illicit advice’. “OpenAI on Wednesday announced a new ‘safety evaluations hub,’ a webpage where it will publicly display artificial intelligence models’ safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as ‘hateful content or illicit advice.’”</p><p><a href="https://rbfirehose.com/2025/05/17/cnbc-openai-will-show-how-models-do-on-hallucination-tests-and-illicit-advice/" class="" rel="nofollow noopener noreferrer" target="_blank">https://rbfirehose.com/2025/05/17/cnbc-openai-will-show-how-models-do-on-hallucination-tests-and-illicit-advice/</a></p>