AI-Powered Search

Authors: Trey Grainger, Max Irwin, Doug Turnbull
Publication date: 2025
Publisher: Manning Publications Co.
Pages: 520
File size: 6.7 MB
File format: PDF
Added by: codelibs

AI-Powered Search....1

brief contents....6

contents....7

foreword....15

preface....17

acknowledgments....19

about this book....21

Who should read this book....22

How this book is organized: A road map....22

About the code....24

liveBook discussion forum....26

Other online resources....26

about the authors....28

about the cover illustration....30

Part 1 Modern search relevance....32

1 Introducing AI-powered search....34

1.1 What is AI-powered search?....36

1.2 Understanding user intent....39

1.2.1 What is a search engine?....40

1.2.2 What do recommendation engines offer?....41

1.2.3 The personalization spectrum between search and recommendations....42

1.2.4 Semantic search and knowledge graphs....43

1.2.5 Understanding the dimensions of user intent....44

1.3 How does AI-powered search work?....46

1.3.1 The core search foundation....47

1.3.2 Reflected intelligence through feedback loops....47

1.3.3 Signals boosting, collaborative filtering, and learning to rank....48

1.3.4 Content and domain intelligence....50

1.3.5 Generative AI and retrieval augmented generation....51

1.3.6 Curated vs. black-box AI....52

1.3.7 Architecture for an AI-powered search engine....53

Summary....54

2 Working with natural language....56

2.1 The myth of unstructured data....57

2.1.1 Types of unstructured data....58

2.1.2 Data types in traditional structured databases....59

2.1.3 Joins, fuzzy joins, and entity resolution in unstructured data....60

2.2 The structure of natural language....64

2.3 Distributional semantics and embeddings....66

2.4 Modeling domain-specific knowledge....71

2.5 Challenges in natural language understanding for search....74

2.5.1 The challenge of ambiguity (polysemy)....74

2.5.2 The challenge of understanding context....75

2.5.3 The challenge of personalization....75

2.5.4 Challenges interpreting queries vs. documents....76

2.5.5 Challenges interpreting query intent....76

2.6 Content + signals: The fuel powering AI-powered search....77

Summary....79

3 Ranking and content-based relevance....80

3.1 Scoring query and document vectors with cosine similarity....81

3.1.1 Mapping text to vectors....82

3.1.2 Calculating similarity between dense vector representations....83

3.1.3 Calculating similarity between sparse vector representations....84

3.1.4 Term frequency: Measuring how well documents match a term....86

3.1.5 Inverse document frequency: Measuring the importance of a term in the query....91

3.1.6 TF-IDF: A balanced weighting metric for text-based relevance....92

3.2 Controlling the relevance calculation....93

3.2.1 BM25: The industry standard default text-similarity algorithm....93

3.2.2 Functions, functions, everywhere!....98

3.2.3 Choosing multiplicative vs. additive boosting for relevance functions....101

3.2.4 Differentiating matching (filtering) vs. ranking (scoring) of documents....103

3.2.5 Logical matching: Weighting the relationships between terms in a query....103

3.2.6 Separating concerns: Filtering vs. scoring....105

3.3 Implementing user and domain-specific relevance ranking....107

Summary....108

4 Crowdsourced relevance....109

4.1 Working with user signals....110

4.1.1 Content vs. signals vs. models....110

4.1.2 Setting up our product and signals datasets (RetroTech)....112

4.1.3 Exploring the signals data....115

4.1.4 Modeling users, sessions, and requests....117

4.2 Introducing reflected intelligence....118

4.2.1 What is reflected intelligence?....118

4.2.2 Popularized relevance through signals boosting....120

4.2.3 Personalized relevance through collaborative filtering....125

4.2.4 Generalized relevance through learning to rank....127

4.2.5 Other reflected intelligence models....128

4.2.6 Crowdsourcing from content....129

Summary....130

Part 2 Learning domain-specific intent....132

5 Knowledge graph learning....134

5.1 Working with knowledge graphs....135

5.2 Using our search engine as a knowledge graph....137

5.3 Automatically extracting knowledge graphs from content....137

5.3.1 Extracting arbitrary relationships from text....138

5.3.2 Extracting hyponyms and hypernyms from text....140

5.4 Learning intent by traversing semantic knowledge graphs....143

5.4.1 What is a semantic knowledge graph?....143

5.4.2 Indexing the datasets....144

5.4.3 Structure of an SKG....145

5.4.4 Calculating edge weights to measure the relatedness of nodes....147

5.4.5 Using SKGs for query expansion....151

5.4.6 Using SKGs for content-based recommendations....155

5.4.7 Using SKGs to model arbitrary relationships....158

5.5 Using knowledge graphs for semantic search....160

Summary....161

6 Using context to learn domain-specific language....162

6.1 Classifying query intent....163

6.2 Query-sense disambiguation....166

6.3 Learning related phrases from query signals....171

6.3.1 Mining query logs for related queries....172

6.3.2 Finding related queries through product interactions....177

6.4 Phrase detection from user signals....181

6.4.1 Treating queries as entities....182

6.4.2 Extracting entities from more complex queries....183

6.5 Misspellings and alternative representations....183

6.5.1 Learning spelling corrections from documents....184

6.5.2 Learning spelling corrections from user signals....185

6.6 Pulling it all together....190

Summary....190

7 Interpreting query intent through semantic search....192

7.1 The mechanics of query interpretation....193

7.2 Indexing and searching on a local reviews dataset....195

7.3 An end-to-end semantic search example....199

7.4 Query interpretation pipelines....201

7.4.1 Parsing a query for semantic search....201

7.4.2 Enriching a query for semantic search....210

7.4.3 Sparse lexical and expansion models....215

7.4.4 Transforming a query for semantic search....218

7.4.5 Searching with a semantically enhanced query....219

Summary....221

Part 3 Reflected intelligence....222

8 Signals-boosting models....224

8.1 Basic signals boosting....225

8.2 Normalizing signals....226

8.3 Fighting signal spam....228

8.3.1 Using signal spam to manipulate search results....229

8.3.2 Combating signal spam through user-based filtering....231

8.4 Combining multiple signal types....233

8.5 Time decays and short-lived signals....235

8.5.1 Handling time-insensitive signals....236

8.5.2 Handling time-sensitive signals....236

8.6 Index-time vs. query-time boosting: Balancing scale vs. flexibility....239

8.6.1 Tradeoffs when using query-time boosting....239

8.6.2 Implementing index-time signals boosting....241

8.6.3 Tradeoffs when implementing index-time boosting....243

Summary....246

9 Personalized search....247

9.1 Personalized search vs. recommendations....248

9.1.1 Personalized queries....250

9.1.2 User-guided recommendations....251

9.2 Recommendation algorithm approaches....251

9.2.1 Content-based recommenders....251

9.2.2 Behavior-based recommenders....253

9.2.3 Multimodal recommenders....254

9.3 Implementing collaborative filtering....255

9.3.1 Learning latent user and item features through matrix factorization....255

9.3.2 Implementing collaborative filtering with Alternating Least Squares....259

9.3.3 Personalizing search results with recommendation boosting....265

9.4 Personalizing search using content-based embeddings....269

9.4.1 Generating content-based latent features....269

9.4.2 Implementing categorical guardrails for personalization....272

9.4.3 Integrating embedding-based personalization into search results....277

9.5 Challenges with personalizing search results....282

Summary....283

10 Learning to rank for generalizable search relevance....285

10.1 What is LTR?....286

10.1.1 Moving beyond manual relevance tuning....286

10.1.2 Implementing LTR in the real world....287

10.2 Step 1: A judgment list, starting with the training data....290

10.3 Step 2: Feature logging and engineering....291

10.3.1 Storing features in a modern search engine....292

10.3.2 Logging features from our search engine corpus....293

10.4 Step 3: Transforming LTR to a traditional machine learning problem....295

10.4.1 SVMrank: Transforming ranking to binary classification....296

10.4.2 Transforming our LTR training task to binary classification....298

10.5 Step 4: Training (and testing!) the model....306

10.5.1 Turning a separating hyperplane’s vector into a scoring function....306

10.5.2 Taking the model for a test drive....307

10.5.3 Validating the model....308

10.6 Steps 5 and 6: Upload a model and search....310

10.6.1 Deploying and using the LTR model....310

10.6.2 A note on LTR performance....313

10.7 Rinse and repeat....314

Summary....314

11 Automating learning to rank with click models....316

11.1 (Re)creating judgment lists from signals....318

11.1.1 Generating implicit, probabilistic judgments from signals....318

11.1.2 Training an LTR model using probabilistic judgments....320

11.1.3 Click-Through Rate: Your first click model....321

11.1.4 Common biases in judgments....325

11.2 Overcoming position bias....326

11.2.1 Defining position bias....326

11.2.2 Position bias in RetroTech data....326

11.2.3 Simplified dynamic Bayesian network: A click model that overcomes position bias....328

11.3 Handling confidence bias: Not upending your model due to a few lucky clicks....333

11.3.1 The low-confidence problem in click data....334

11.3.2 Using a beta prior to model confidence probabilistically....335

11.4 Exploring your training data in an LTR system....342

Summary....343

12 Overcoming ranking bias through active learning....345

12.1 Our automated LTR engine in a few lines of code....348

12.1.1 Turning clicks into training data (chapter 11 in one line of code)....348

12.1.2 Model training and evaluation in a few function calls....349

12.2 A/B testing a new model....351

12.2.1 Taking a better model out for a test drive....351

12.2.2 Defining an A/B test in the context of automated LTR....352

12.2.3 Graduating the better model into an A/B test....353

12.2.4 When “good” models go bad: What we can learn from a failed A/B test....354

12.3 Overcoming presentation bias: Knowing when to explore vs. exploit....356

12.3.1 Presentation bias in the RetroTech training data....357

12.3.2 Beyond the ad hoc: Thoughtfully exploring with a Gaussian process....358

12.3.3 Examining the outcome of our explorations....365

12.4 Exploit, explore, gather, rinse, repeat: A robust automated LTR loop....367

Summary....369

Part 4 The search frontier....370

13 Semantic search with dense vectors....372

13.1 Representation of meaning through embeddings....373

13.2 Search using dense vectors....374

13.2.1 A brief refresher on sparse vectors....375

13.2.2 A conceptual dense vector search engine....375

13.3 Getting text embeddings by using a Transformer encoder....379

13.3.1 What is a Transformer?....379

13.3.2 Openly available pretrained Transformer models....382

13.4 Applying Transformers to search....382

13.4.1 Using the Stack Exchange outdoors dataset....383

13.4.2 Fine-tuning and the Semantic Text Similarity Benchmark....385

13.4.3 Introducing the SBERT Transformer library....386

13.5 Natural language autocomplete....389

13.5.1 Getting noun and verb phrases for our nearest-neighbor vocabulary....390

13.5.2 Getting embeddings....392

13.5.3 ANN search....396

13.5.4 ANN index implementation....398

13.6 Semantic search with LLM embeddings....400

13.6.1 Getting titles and their embeddings....401

13.6.2 Creating and searching the nearest-neighbor index....402

13.7 Quantization and representation learning for more efficient vector search....405

13.7.1 Scalar quantization....407

13.7.2 Binary quantization....412

13.7.3 Product quantization....414

13.7.4 Matryoshka Representation Learning....417

13.7.5 Combining multiple vector search optimization approaches....420

13.8 Cross-encoders vs. bi-encoders....422

Summary....426

14 Question answering with a fine-tuned large language model....427

14.1 Question-answering overview....428

14.1.1 How a question-answering model works....428

14.1.2 The retriever-reader pattern....433

14.2 Constructing a question-answering training dataset....436

14.2.1 Gathering and cleaning a question-answering dataset....437

14.2.2 Creating the silver set: Automatically labeling data from a pretrained model....438

14.2.3 Human-in-the-loop training: Manually correcting the silver set to produce a golden set....441

14.2.4 Formatting the golden set for training, testing, and validation....442

14.3 Fine-tuning the question-answering model....444

14.3.1 Tokenizing and shaping our labeled data....445

14.3.2 Configuring the model trainer....447

14.3.3 Performing training and evaluating loss....449

14.3.4 Holdout validation and confirmation....449

14.4 Building the reader with the new fine-tuned model....450

14.5 Incorporating the retriever: Using the question-answering model with the search engine....452

14.5.1 Step 1: Querying the retriever....452

14.5.2 Step 2: Inferring answers from the reader model....453

14.5.3 Step 3: Reranking the answers....454

14.5.4 Step 4: Returning results by combining the retriever, reader, and reranker....454

Summary....456

15 Foundation models and emerging search paradigms....457

15.1 Understanding foundation models....458

15.1.1 What qualifies as a foundation model?....459

15.1.2 Training vs. fine-tuning vs. prompting....459

15.2 Generative search....462

15.2.1 Retrieval augmented generation....464

15.2.2 Results summarization using foundation models....466

15.2.3 Data generation using foundation models....469

15.2.4 Evaluating generative output....472

15.2.5 Constructing your own metric....474

15.2.6 Algorithmic prompt optimization....476

15.3 Multimodal search....478

15.3.1 Common modes for multimodal search....478

15.3.2 Implementing multimodal search....480

15.4 Other emerging AI-powered search paradigms....485

15.4.1 Conversational and contextual search....485

15.4.2 Agent-based search....487

15.5 Hybrid search....487

15.5.1 Reciprocal rank fusion....488

15.5.2 Other hybrid search algorithms....494

15.6 Convergence of contextual technologies....496

15.7 All the above, please!....497

Summary....498

appendix A Running the code examples....500

A.1 Overall structure of code examples....500

A.2 Pulling the source code....501

A.3 Building and running the code....501

A.4 Working with Jupyter....503

A.5 Working with Docker....504

appendix B Supported search engines and vector databases....505

B.1 Supported engines....505

B.2 Swapping out the engine....505

B.3 The engine and collection abstractions....506

B.4 Adding support for additional engines....507

index....510

A....510

B....510

C....511

D....511

E....512

F....512

G....512

H....513

I....513

J....513

K....513

L....513

M....514

N....514

O....515

P....515

Q....515

R....516

S....516

T....518

U....518

V....518

W....519

AI-Powered Search - back....520

Delivering effective search is one of the biggest challenges you can face as an engineer. AI-Powered Search is an in-depth guide to building intelligent search systems you can be proud of. It covers the critical tools you need to automate ongoing relevance improvements within your search applications.

Inside you’ll learn modern, data-science-driven search techniques like:

  • Semantic search using dense vector embeddings from foundation models
  • Retrieval augmented generation (RAG)
  • Question answering and summarization combining search and LLMs
  • Fine-tuning transformer-based LLMs
  • Personalized search based on user signals and vector embeddings
  • Collecting user behavioral signals and building signals boosting models
  • Semantic knowledge graphs for domain-specific learning
  • Semantic query parsing, query-sense disambiguation, and query intent classification
  • Implementing machine-learned ranking models (Learning to Rank)
  • Building click models to automate machine-learned ranking
  • Generative search, hybrid search, multimodal search, and the search frontier
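To give a flavor of the techniques listed above, hybrid search often combines lexical and vector result lists with reciprocal rank fusion (covered in section 15.5.1). The sketch below is an illustration of the general RRF algorithm, not code from the book; the document IDs and the smoothing constant k=60 (the value from the original RRF paper) are illustrative:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    Each document scores 1 / (k + rank) per list it appears in;
    the fused ranking sorts by the summed score, highest first.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from a lexical and a vector retriever
lexical = ["d1", "d2", "d3"]
semantic = ["d3", "d1", "d4"]
print(reciprocal_rank_fusion([lexical, semantic]))  # → ['d1', 'd3', 'd2', 'd4']
```

Documents ranked highly by both retrievers ("d1", "d3") float to the top even though neither retriever agreed on the exact order.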

AI-Powered Search will help you build the kind of highly intelligent search applications demanded by modern users. Whether you’re enhancing your existing search engine or building from scratch, you’ll learn how to deliver an AI-powered service that can continuously learn from every content update, user interaction, and the hidden semantic relationships in your content. You’ll learn both how to enhance your AI systems with search and how to integrate large language models (LLMs) and other foundation models to massively accelerate the capabilities of your search technology.

Foreword by Grant Ingersoll.

About the technology

Modern search is more than keyword matching. Much, much more. Search that learns from user interactions, interprets intent, and takes advantage of AI tools like large language models (LLMs) can deliver highly targeted and relevant results. This book shows you how to up your search game using state-of-the-art AI algorithms, techniques, and tools.

About the book

AI-Powered Search teaches you to create search applications that understand natural language and improve automatically the more they are used. As you work through dozens of interesting and relevant examples, you’ll learn powerful AI-based techniques like semantic search on embeddings, question answering powered by LLMs, real-time personalization, and retrieval augmented generation (RAG).
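At the core of the semantic search on embeddings mentioned here (and developed in chapters 3 and 13) is a simple operation: scoring a query embedding against document embeddings with cosine similarity. A minimal sketch, not from the book, using made-up three-dimensional vectors in place of real model embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.2, 0.7, 0.1]    # hypothetical query embedding
doc = [0.25, 0.6, 0.2]     # hypothetical document embedding
score = cosine_similarity(query, doc)  # close to 1.0 → semantically similar
```

In a real system the vectors come from an embedding model and the comparison runs over millions of documents via an approximate nearest-neighbor index, but the scoring math is exactly this.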

What's inside

  • Sparse lexical and embedding-based semantic search
  • Question answering, RAG, and summarization using LLMs
  • Personalized search and signals boosting models
  • Learning to Rank, multimodal, and hybrid search

About the reader

For software developers and data scientists familiar with the basics of search engine technology.

