https://themodernsoftware.dev/
Week 1: Introduction to Coding LLMs and AI Development
- LLMs (large language models) are autoregressive models for next-token prediction
- Training Process
- Self-supervised pretraining
- Supervised fine-tuning (instruction following)
- Preference tuning (RL)
- Prompting
- Zero-shot prompting
- K-shot prompting
- in-context learning
- Ideal for tasks that don’t have too many reasoning steps
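The zero-shot vs. k-shot distinction is just about how the prompt string is built. A minimal sketch (`build_few_shot_prompt` is a hypothetical helper, not a library function):

```python
# Zero-shot: ask directly. K-shot: prepend k worked examples so the
# model picks up the task format via in-context learning.
def build_few_shot_prompt(examples, query):
    """Format k (input, output) example pairs followed by the real query."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

zero_shot = "Classify the sentiment: 'The build finally passed!'"
few_shot = build_few_shot_prompt(
    [("Great library!", "positive"), ("It keeps crashing.", "negative")],
    "The build finally passed!",
)
```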
- Chain-of-Thought Prompting
- Show reasoning steps for a given task (Multi-shot CoT, Zero-shot CoT)
- Self-consistency Prompting
- Run inference multiple times and take the most common answer (analogous to bagging)
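Self-consistency is a majority vote over sampled answers. A sketch with a deterministic stand-in for a stochastic LLM call (in practice you would sample with temperature > 0):

```python
from collections import Counter

def self_consistency(sample_fn, question, n=5):
    """Sample n answers and keep the most frequent final answer."""
    answers = [sample_fn(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Canned answers standing in for repeated stochastic LLM calls (hypothetical):
canned = iter(["7", "8", "7", "7", "9"])
best = self_consistency(lambda q: next(canned), "What is 3 + 4?", n=5)
```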
- Tool Use
- Interact with external systems to reduce hallucinations
- Retrieval-Augmented Generation (RAG)
- Reflexion (self-reflection)
- e.g.: "Now critique your answer. Was it correct? If not, explain why and try again."
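The critique-and-retry prompt above can be wrapped in a loop. A minimal sketch, where `call_llm` is a hypothetical stand-in for a real chat-completion call:

```python
# Reflexion-style loop: draft an answer, ask the model to critique it,
# and revise until the critique says "Correct" or we run out of rounds.
def reflexion(call_llm, task, max_rounds=3):
    answer = call_llm(task)
    for _ in range(max_rounds):
        critique = call_llm(
            f"Task: {task}\nAnswer: {answer}\n"
            "Now critique your answer. Was it correct? "
            "If not, explain why and try again."
        )
        if critique.strip().lower().startswith("correct"):
            break
        answer = call_llm(f"Task: {task}\nRevise using this critique: {critique}")
    return answer
```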
Some terminology:
- System Prompt:
- First message provided to LLM (usually not seen by end user)
- Provides persona, rules about LLM output, style
- User Prompt:
- The actual ask or instruction from a human
- Assistant:
- The LLM's generated response
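The three roles above map directly onto the message list sent to a chat API. A sketch following the common chat-completions convention (exact field names vary between providers):

```python
# One conversation turn as a role-tagged message list: the system prompt
# sets persona and rules, the user prompt carries the actual ask, and the
# assistant message is the model's reply.
messages = [
    {"role": "system",
     "content": "You are a concise Python reviewer. Answer in bullet points."},
    {"role": "user",
     "content": "Review this function for bugs: def add(a, b): return a - b"},
    {"role": "assistant",
     "content": "- Bug: the function subtracts instead of adding; use a + b."},
]
```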
Best practices:
- "Use our prompt improver to optimize your prompts" (Claude Docs)
- Clear prompting
- Role prompting
- Prompts should be formatted with structure
- Be explicit about what you want (languages, tech stacks, libraries, constraints)
- Decompose tasks into smaller subtasks
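These practices can be combined in one prompt builder: structured formatting, explicit constraints (language, stack, libraries), and task decomposition. A sketch; the XML-style tags are one common convention (used in Anthropic's docs), not a requirement, and the helper name is hypothetical:

```python
# Build a structured prompt: a clear goal, explicit constraints, and the
# task decomposed into numbered steps.
def build_structured_prompt(goal, constraints, steps):
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    step_lines = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"<goal>\n{goal}\n</goal>\n"
        f"<constraints>\n{constraint_lines}\n</constraints>\n"
        f"<steps>\n{step_lines}\n</steps>"
    )

prompt = build_structured_prompt(
    "Add pagination to the /users endpoint",
    ["Python 3.11, FastAPI", "No new dependencies",
     "Keep the response schema backward compatible"],
    ["Read the existing handler", "Add limit/offset query params", "Write tests"],
)
```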