Transitions
- First transition: from rule-based approaches to statistical learning
- Rise of semantic parsing: statistical models that parse language into structured meaning representations
- Then, the move from semantic parsing to large models: decision making and language modeling end up in the same system
Importance of LLMs
- (1) They are simply better at understanding natural language inputs
- (2) They can generate structured output (not just human language: JSON, etc.) (see the JSON sketch after this list)
- (3) They can perform natural language “reasoning”, not just natural language generation
- 1+3 gives you chain-of-thought reasoning (see the chain-of-thought sketch after this list)
- 1+2 gives CALM, SayCan, and other approaches that use an LM to turn text into executable actions for RL and robotics (see the action-selection sketch after this list)
- All three together give ReAct (loop sketch in the ReAct section below)
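
A minimal sketch of capability (2), structured output: prompt for JSON and parse it so downstream code gets a dict instead of free text. The prompt wording and the `call_llm` helper are placeholders (not any specific API); the stub returns a canned response so the sketch runs end to end.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (API or local model); returns a
    # canned response so the sketch runs end to end.
    return '{"action": "pick_up", "object": "red block", "confidence": 0.92}'

def extract_structured(instruction: str) -> dict:
    prompt = (
        "Convert the instruction into JSON with keys "
        '"action", "object", and "confidence".\n'
        f"Instruction: {instruction}\nJSON:"
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # downstream code consumes a dict, not free text

if __name__ == "__main__":
    print(extract_structured("Grab the red block off the table."))
```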
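
A minimal sketch of 1+3, chain-of-thought prompting: ask the model to reason step by step and end with a marked final answer, then read the answer off that last line. The prompt format and `call_llm` stub are assumptions.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; canned reasoning trace for the demo.
    return (
        "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples.\n"
        "Eating 2 leaves 12 - 2 = 10.\n"
        "Final answer: 10"
    )

def chain_of_thought(question: str) -> str:
    prompt = (
        f"Question: {question}\n"
        "Think step by step, then end with a line 'Final answer: <answer>'."
    )
    reasoning = call_llm(prompt)
    # The model generates the reasoning trace first; the answer is parsed
    # from the final line rather than asked for directly.
    for line in reversed(reasoning.splitlines()):
        if line.lower().startswith("final answer:"):
            return line.split(":", 1)[1].strip()
    return reasoning.strip()

if __name__ == "__main__":
    print(chain_of_thought("I have 3 boxes of 4 apples and eat 2. How many are left?"))
```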
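
A minimal sketch of 1+2 in the SayCan/CALM spirit: restrict the agent to a fixed action vocabulary and let the LM rank the candidates for the current instruction. The scoring prompt and `call_llm` stub are assumptions; real SayCan also multiplies each LM score by a learned affordance value, which is omitted here.

```python
ACTIONS = [
    "go to the kitchen",
    "pick up the sponge",
    "wipe the table",
    "put down the sponge",
]

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; canned scores so the sketch runs.
    return "0.9" if "pick up the sponge" in prompt else "0.2"

def score_action(instruction: str, action: str) -> float:
    prompt = (
        f"Instruction: {instruction}\n"
        f"Candidate next step: {action}\n"
        "Rate how useful this step is from 0 to 1. Answer with a number only."
    )
    try:
        return float(call_llm(prompt))
    except ValueError:
        return 0.0  # unparseable output counts as useless

def choose_action(instruction: str) -> str:
    # The agent only ever picks from actions the robot can execute;
    # SayCan additionally weights each LM score by a learned affordance.
    return max(ACTIONS, key=lambda a: score_action(instruction, a))

if __name__ == "__main__":
    print(choose_action("Help me clean up the spill on the table."))
```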
ReAct
See ReAct (Yao et al., 2022): interleave reasoning traces with actions and tool observations in a single loop
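
A minimal sketch of a ReAct-style loop under stated assumptions: the model emits a thought plus an action, the action is executed against a toy `lookup` tool, the observation is appended to the prompt, and the loop ends on `finish`. `call_llm` is scripted here so the loop runs; this is not the paper's exact prompt format.

```python
def lookup(query: str) -> str:
    # Toy tool; a real agent would have search, calculators, robot APIs, ...
    facts = {"capital of France": "Paris"}
    return facts.get(query, "no result")

SCRIPTED_TURNS = [
    "Thought: I need to look this up.\nAction: lookup[capital of France]",
    "Thought: I have the answer.\nAction: finish[Paris]",
]

def call_llm(prompt: str, turn: int) -> str:
    # Placeholder for a real model call; scripted turns so the loop runs.
    return SCRIPTED_TURNS[turn]

def react(question: str, max_steps: int = 2) -> str:
    prompt = f"Question: {question}\n"
    for step in range(max_steps):
        output = call_llm(prompt, step)           # thought + action from the model
        prompt += output + "\n"
        action_line = output.splitlines()[-1]     # e.g. "Action: lookup[capital of France]"
        name, arg = action_line.split(": ", 1)[1].split("[", 1)
        arg = arg.rstrip("]")
        if name == "finish":
            return arg                            # the agent decided it is done
        observation = lookup(arg)                 # execute the tool call
        prompt += f"Observation: {observation}\n" # feed the result back into the context
    return "no answer within step budget"

if __name__ == "__main__":
    print(react("What is the capital of France?"))
```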
Problem: agents are not robust at all
https://github.com/ryoungj/ToolEmu (ToolEmu: an LM-emulated sandbox for identifying the risks of LM agents)