Author(s): Manasha Pratima
Originally published on Towards AI.
I didn't change models. I didn't fine-tune them. I didn't add a single row of new training data. I just stopped trusting AI.

The article describes the author's experience reducing AI hallucinations in production systems without changing the model itself. The author added a set of engineering checks, such as logging everything, validating outputs, and allowing the model to express uncertainty, which together produced a significant reduction in hallucinations and errors. These practical steps underscore the importance of enforcing reality checks and maintaining a critical stance toward model outputs rather than blindly trusting AI.
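The article's own implementation details sit behind the Medium link, so the sketch below is only a rough illustration of the pattern it describes (log everything, validate outputs, let the model say it doesn't know), not the author's actual code. The `checked_answer` function, the JSON output contract, the `UNCERTAIN` sentinel, and the injected `call_model` callable are all assumptions made here for the example.

```python
import json
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_checks")

# Hypothetical sentinel the prompt asks the model to emit when unsure.
UNCERTAIN = "NOT_ENOUGH_INFORMATION"


def checked_answer(question: str, call_model: Callable[[str], str]) -> Optional[str]:
    """Ask the model, log everything, validate the reply, and allow 'I don't know'."""
    prompt = (
        question
        + '\n\nReply as JSON: {"answer": "..."}. '
        + f'If you are unsure, set "answer" to "{UNCERTAIN}" instead of guessing.'
    )
    raw = call_model(prompt)

    # Log everything: prompt and raw output, before any filtering happens.
    log.info("prompt=%r raw_output=%r", prompt, raw)

    # Validate the output: anything that is not well-formed JSON with an
    # "answer" field is rejected rather than passed downstream.
    try:
        answer = json.loads(raw)["answer"]
    except (json.JSONDecodeError, KeyError, TypeError):
        log.warning("rejected malformed output: %r", raw)
        return None

    # Declared uncertainty is a valid outcome, not an error to paper over.
    if answer == UNCERTAIN:
        log.info("model declined to answer")
        return None

    return answer


# Usage with a stubbed model that declines to answer; prints None.
print(checked_answer("Who won in 2031?", lambda p: f'{{"answer": "{UNCERTAIN}"}}'))
```

The design choice worth noting is that a declined answer returns `None` just like a malformed one, so downstream code has exactly one path for "no trustworthy answer" instead of silently consuming a guess.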
Read the entire blog for free on Medium.
Published via Towards AI
Take our 90+ lesson Beginner to Advanced LLM Developer Certification: From project selection to deploying a working product, this is the most comprehensive and practical LLM course on the market!
Towards AI has published 'Building LLMs for Production' – our 470+ page guide to mastering LLMs with practical projects and expert insights!
Discover your dream AI career with Towards AI Jobs
Towards AI has created a job board tailored specifically to machine learning and data science jobs and skills. Our software scans for live AI jobs every hour, tags and categorizes them so they can be easily searched. Explore over 40,000 live job opportunities with Towards AI Jobs today!
Note: The content reflects the views of the authors and not those of Towards AI.