Some experience with thinking models:
- for some questions they waste tokens profusely. A magazine's worth of thinking? What, have they got my anxiety or something?
- for unit tests they assume the code is correct, that there is a correct answer. But unit testing is all about finding crap code! One wrote a unit test when it should have said, "this code is buggy and I don't understand what it's supposed to do"
- they translate 1980s BASIC into Python like a champ. They also OCR like a champ, granted
This experience is with Google's Gemini, which I'm testing after two years of ChatGPT, having been frustrated with Google's Bard (moron!) and the subsequent models' refusal to speak Esperanto (addressed, finally!)