```python
%run _help_reading.py
import pandas as pd

# Load the cleaned reading-log data from the blog's repo
df = pd.read_csv(
    'https://github.com/MrGeislinger/victorsothervector/raw/main/'
    'data/reading/all_reading-clean.csv'
)

book_name = """The Enigma of Reason"""
# Helpers from _help_reading.py: select this title's entries,
# summarize reading by day, and plot the progress
one_title = one_title_data(df, book_name)
one_title_summary = get_summary_by_day(one_title)
generate_plot(one_title_summary, book_name);
```
Figure 1: Reading done for The Enigma of Reason
Thoughts on The Enigma of Reason
Overview
If reason is what makes us human, why do we behave so irrationally? And if it is so useful, why didn’t it evolve in other animals? This groundbreaking account of the evolution of reason by two renowned cognitive scientists seeks to solve this double enigma. Reason, they argue, helps us justify our beliefs, convince others, and evaluate arguments. It makes it easier to cooperate and communicate and to live together in groups. Provocative, entertaining, and undeniably relevant, The Enigma of Reason will make many reasonable people rethink their beliefs. By Dan Sperber and Hugo Mercier.
We suck at ‘reasoning’ because we have the wrong definition of ‘reasoning’
Describes the ‘argumentative theory of reasoning’, which proposes that we developed reason-giving as a way to argue (to have a discussion). This implies that ‘reasoning’ is a social act and works best when done as a group. In short, people produce reasons lazily and with bias, while they evaluate others’ reasons objectively & demandingly.
I find this idea fascinating and likely closer to the truth about how we ‘reason’ through logic. It actually explains the phenomenon where we often get logic puzzles wrong on our own but arrive at correct answers when working in a group. This would mean our ability to perform logic wasn’t the driving force in our evolution, but a side effect of giving arguments as part of a group.
It might just be the time I’m reading this in, but my mind kept wandering to the idea of ‘reasoning AI’. The strategies taken have mostly been either assuming logic & reason can be developed with structure & training (the model getting ‘smarter’) or using something like the concept of ‘slow’ & ‘fast’ systems of thinking (based on Daniel Kahneman’s thesis in Thinking, Fast and Slow) by having the model perform ‘chain-of-thought reasoning’.
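For a concrete sense of that second strategy, here’s a minimal sketch of chain-of-thought-style prompting, assuming the `openai` Python client and a placeholder model name (not anything the book discusses); the only real change from a plain prompt is asking the model to lay out its reasoning before the answer.

```python
# Minimal chain-of-thought prompting sketch.
# Assumes the `openai` client library; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

puzzle = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Asking for step-by-step reasoning before the final answer is the
        # core of the 'slow thinking' / chain-of-thought idea.
        {"role": "system", "content": "Reason step by step, then state a final answer."},
        {"role": "user", "content": puzzle},
    ],
)
print(response.choices[0].message.content)
```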
To me, a lot of the ‘argumentative theory of reasoning’ seems to be present in many LLMs (especially those without extensive modern fine-tuning), where the model will be confidently wrong and can end up in a back-and-forth chat with a human while spouting incorrect reasons. I wonder if a model fine-tuned or even prompted with the expectation of a back-and-forth could do better at staying consistent on some of these tasks. This could emulate the idea of ‘social reasoning’ for which Dan Sperber and Hugo Mercier present evidence in their book.
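To make that speculation concrete, below is a rough sketch of one way such a back-and-forth could be wired up: one model role proposes an answer with reasons, another evaluates the argument, and the proposer revises. This is my own illustration (not from the book), it again assumes the `openai` client with a placeholder model name, and it mirrors the produce-lazily / evaluate-demandingly split from the argumentative theory.

```python
# Sketch of an 'argumentative' back-and-forth between two model roles:
# a proposer that gives reasons and an evaluator that critiques them.
# Assumes the `openai` client library; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def ask(system: str, user: str) -> str:
    """Single chat completion with a role-setting system prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def argue(question: str, rounds: int = 2) -> str:
    # Proposer: produce an answer with supporting reasons.
    answer = ask("Propose an answer and give your reasons.", question)
    for _ in range(rounds):
        # Evaluator: play the demanding audience and critique the reasons.
        critique = ask(
            "Evaluate the following argument critically and point out flaws.",
            f"Question: {question}\n\nArgument: {answer}",
        )
        # Proposer: revise the answer in light of the critique.
        answer = ask(
            "Revise your answer to address the critique, giving reasons.",
            f"Question: {question}\n\nPrevious answer: {answer}\n\n"
            f"Critique: {critique}",
        )
    return answer

print(argue("A bat and a ball cost $1.10 total; the bat costs $1.00 more "
            "than the ball. How much does the ball cost?"))
```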
This was a great read that really makes me think (ironically)! I was actually reminded of this book by the VSauce video The Future of Reasoning, which leans heavily on this book’s ideas. I think it’s worth a watch even if you don’t want to read the book!