Prompt Engineering for LLMs: The Art and Science of Building Large Language Model–Based Applications

Authors: John Berryman, Albert Ziegler
Publication date: 2025
Publisher: O’Reilly Media, Inc.
Pages: 282
File size: 3.6 MB
File type: PDF
Added by: codelibs

Copyright

Table of Contents

Preface

Who Is This Book For?

What You Will Learn

Conventions Used in This Book

O’Reilly Online Learning

How to Contact Us

Acknowledgments

From John

From Albert

Part I. Foundations

Chapter 1. Introduction to Prompt Engineering

LLMs Are Magic

Language Models: How Did We Get Here?

Early Language Models

GPT Enters the Scene

Prompt Engineering

Conclusion

Chapter 2. Understanding LLMs

What Are LLMs?

Completing a Document

Human Thought Versus LLM Processing

Hallucinations

How LLMs See the World

Difference 1: LLMs Use Deterministic Tokenizers

Difference 2: LLMs Can’t Slow Down and Examine Letters

Difference 3: LLMs See Text Differently

Counting Tokens

One Token at a Time

Auto-Regressive Models

Patterns and Repetitions

Temperature and Probabilities

The Transformer Architecture

Conclusion

Chapter 3. Moving to Chat

Reinforcement Learning from Human Feedback

The Process of Building an RLHF Model

Keeping LLMs Honest

Avoiding Idiosyncratic Behavior

RLHF Packs a Lot of Bang for the Buck

Beware of the Alignment Tax

Moving from Instruct to Chat

Instruct Models

Chat Models

The Changing API

Chat Completion API

Comparing Chat with Completion

Moving Beyond Chat to Tools

Prompt Engineering as Playwriting

Conclusion

Chapter 4. Designing LLM Applications

The Anatomy of the Loop

The User’s Problem

Converting the User’s Problem to the Model Domain

Using the LLM to Complete the Prompt

Transforming Back to User Domain

Zooming In to the Feedforward Pass

Building the Basic Feedforward Pass

Exploring the Complexity of the Loop

Evaluating LLM Application Quality

Offline Evaluation

Online Evaluation

Conclusion

Part II. Core Techniques

Chapter 5. Prompt Content

Sources of Content

Static Content

Clarifying Your Question

Few-Shot Prompting

Dynamic Content

Finding Dynamic Context

Retrieval-Augmented Generation

Summarization

Conclusion

Chapter 6. Assembling the Prompt

Anatomy of the Ideal Prompt

What Kind of Document?

The Advice Conversation

The Analytic Report

The Structured Document

Formatting Snippets

More on Inertness

Formatting Few-Shot Examples

Elastic Snippets

Relationships Among Prompt Elements

Position

Importance

Dependency

Putting It All Together

Conclusion

Chapter 7. Taming the Model

Anatomy of the Ideal Completion

The Preamble

Recognizable Start and End

Postscript

Beyond the Text: Logprobs

How Good Is the Completion?

LLMs for Classification

Critical Points in the Prompt

Choosing the Model

Conclusion

Part III. An Expert of the Craft

Chapter 8. Conversational Agency

Tool Usage

LLMs Trained for Tool Usage

Guidelines for Tool Definitions

Reasoning

Chain of Thought

ReAct: Iterative Reasoning and Action

Beyond ReAct

Context for Task-Based Interactions

Sources for Context

Selecting and Organizing Context

Building a Conversational Agent

Managing Conversations

User Experience

Conclusion

Chapter 9. LLM Workflows

Would a Conversational Agent Suffice?

Basic LLM Workflows

Tasks

Assembling the Workflow

Example Workflow: Shopify Plug-in Marketing

Advanced LLM Workflows

Allowing an LLM Agent to Drive the Workflow

Stateful Task Agents

Roles and Delegation

Conclusion

Chapter 10. Evaluating LLM Applications

What Are We Even Testing?

Offline Evaluation

Example Suites

Finding Samples

Evaluating Solutions

SOMA Assessment

Online Evaluation

A/B Testing

Metrics

Conclusion

Chapter 11. Looking Ahead

Multimodality

User Experience and User Interface

Intelligence

Conclusion

Index

About the Authors

Colophon

Large language models (LLMs) are revolutionizing the world, promising to automate tasks and solve complex problems. A new generation of software applications is using these models as building blocks to unlock new potential in almost every domain, but reliably accessing these capabilities requires new skills. This book will teach you the art and science of prompt engineering: the key to unlocking the true potential of LLMs.

Industry experts John Berryman and Albert Ziegler share how to communicate effectively with AI, transforming your ideas into a language model-friendly format. By learning both the philosophical foundation and practical techniques, you'll be equipped with the knowledge and confidence to build the next generation of LLM-powered applications.

  • Understand LLM architecture and learn how to best interact with it
  • Design a complete prompt-crafting strategy for an application
  • Gather, triage, and present context elements to make an efficient prompt
  • Master specific prompt-crafting techniques like few-shot learning, chain-of-thought prompting, and RAG
