Original: Abhishek Gautam
Originally published in Towards AI.
Do LLMs Really Think? A Look Inside the "Thinking" Mind of AI
This is a question at the heart of the AI revolution. When you interact with a large language model (LLM) and watch it lay out a plan step by step, solve a complex problem, or generate a creative strategy, is it really thinking? Are we witnessing a genuine spark of digital cognition, or are we captivated by an extraordinarily sophisticated illusion?
This article examines the reasoning capabilities of large language models (LLMs) and introduces large reasoning models (LRMs), which augment reasoning through structured processes known as "thoughts." To study these models systematically, researchers built a "cognitive gym" of complex logic puzzles, revealing significant strengths and weaknesses in how the models handle varying levels of difficulty. The findings indicate that while LRMs perform well at medium complexity, they struggle at high complexity, where pattern matching fails and reasoning breaks down. Overall, the article highlights both the real capabilities and the limits of AI reasoning in practical scenarios.
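The "cognitive gym" approach depends on puzzles whose difficulty can be dialed up precisely, so that performance can be measured as a clean function of complexity. As a minimal sketch of that idea (Tower of Hanoi is used here purely as an illustration; the article's actual puzzle suite is not specified in this excerpt):

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the minimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then stack the rest on top.
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# The minimal solution for n disks is exactly 2**n - 1 moves, so a single
# parameter n controls problem complexity in a fully predictable way.
for n in (3, 5, 10):
    moves = hanoi_moves(n)
    assert len(moves) == 2 ** n - 1
    print(n, len(moves))
```

Puzzles with this property let researchers chart exactly where a model's step-by-step "thinking" holds up and where it collapses as the required solution length grows.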
Read the full blog for free on Medium.
Published via Towards AI.