Python code for Artificial Intelligence. Foundations of Computational Agents. 3 Ed

Authors: Mackworth Alan K, Poole David L
Publication date: 2024
Publisher: Self-published
Pages: 390
File size: 1.9 MB
File type: PDF
Added by: codelibs

1 Python for Artificial Intelligence 9  
1.1 Why Python? 9  
1.2 Getting Python 10  
1.3 Running Python 10  
1.4 Pitfalls 11  
1.5 Features of Python 11  
1.5.1 f-strings 11  
1.5.2 Lists, Tuples, Sets, Dictionaries and Comprehensions 12  
1.5.3 Functions as first-class objects 13  
1.5.4 Generators 14  
1.6 Useful Libraries 16  
1.6.1 Timing Code 16  
1.6.2 Plotting: Matplotlib 16  
1.7 Utilities 18  
1.7.1 Display 18  
1.7.2 Argmax 19  
1.7.3 Probability 20  
1.8 Testing Code 21  

2 Agent Architectures and Hierarchical Control 25  
2.1 Representing Agents and Environments 25  
2.2 Paper buying agent and environment 27  
2.2.1 The Environment 27  
2.2.2 The Agent 29  
2.2.3 Plotting 29  
2.3 Hierarchical Controller 31  
2.3.1 Environment 31  
2.3.2 Body 32  
2.3.3 Middle Layer 34  
2.3.4 Top Layer 35  
2.3.5 Plotting 36  

3 Searching for Solutions 41  
3.1 Representing Search Problems 41  
3.1.1 Explicit Representation of Search Graph 43  
3.1.2 Paths 45  
3.1.3 Example Search Problems 47  
3.2 Generic Searcher and Variants 53  
3.2.1 Searcher 53  
3.2.2 GUI for Tracing Search 55  
3.2.3 Frontier as a Priority Queue 59  
3.2.4 A* Search 60  
3.2.5 Multiple Path Pruning 62  
3.3 Branch-and-bound Search 64  

4 Reasoning with Constraints 69  
4.1 Constraint Satisfaction Problems 69  
4.1.1 Variables 69  
4.1.2 Constraints 70  
4.1.3 CSPs 71  
4.1.4 Examples 74  
4.2 A Simple Depth-first Solver 83  
4.3 Converting CSPs to Search Problems 84  
4.4 Consistency Algorithms 86  
4.4.1 Direct Implementation of Domain Splitting 89  
4.4.2 Consistency GUI 91  
4.4.3 Domain Splitting as an interface to graph searching 93  
4.5 Solving CSPs using Stochastic Local Search 95  
4.5.1 Any-conflict 97  
4.5.2 Two-Stage Choice 98  
4.5.3 Updatable Priority Queues 101  
4.5.4 Plotting Run-Time Distributions 102  
4.5.5 Testing 103  
4.6 Discrete Optimization 105  
4.6.1 Branch-and-bound Search 106  

5 Propositions and Inference 109  
5.1 Representing Knowledge Bases 109  
5.2 Bottom-up Proofs (with askables) 112  
5.3 Top-down Proofs (with askables) 114  
5.4 Debugging and Explanation 115  
5.5 Assumables 119  
5.6 Negation-as-failure 122  

6 Deterministic Planning 125  
6.1 Representing Actions and Planning Problems 125  
6.1.1 Robot Delivery Domain 126  
6.1.2 Blocks World 128  
6.2 Forward Planning 130  
6.2.1 Defining Heuristics for a Planner 132  
6.3 Regression Planning 135  
6.3.1 Defining Heuristics for a Regression Planner 137  
6.4 Planning as a CSP 138  
6.5 Partial-Order Planning 142  

7 Supervised Machine Learning 149  
7.1 Representations of Data and Predictions 150  
7.1.1 Creating Boolean Conditions from Features 153  
7.1.2 Evaluating Predictions 155  
7.1.3 Creating Test and Training Sets 157  
7.1.4 Importing Data From File 157  
7.1.5 Augmented Features 160  
7.2 Generic Learner Interface 162  
7.3 Learning With No Input Features 163  
7.3.1 Evaluation 165  
7.4 Decision Tree Learning 167  
7.5 Cross Validation and Parameter Tuning 171  
7.6 Linear Regression and Classification 175  
7.7 Boosting 181  
7.7.1 Gradient Tree Boosting 184  

8 Neural Networks and Deep Learning 187  
8.1 Layers 187  
8.1.1 Linear Layer 188  
8.1.2 ReLU Layer 190  
8.1.3 Sigmoid Layer 190  
8.2 Feedforward Networks 191  
8.3 Improved Optimization 193  
8.3.1 Momentum 193  
8.3.2 RMS-Prop 193  
8.4 Dropout 194  
8.4.1 Examples 195  

9 Reasoning with Uncertainty 201  
9.1 Representing Probabilistic Models 201  
9.2 Representing Factors 201  
9.3 Conditional Probability Distributions 203  
9.3.1 Logistic Regression 203  
9.3.2 Noisy-or 204  
9.3.3 Tabular Factors and Prob 205  
9.3.4 Decision Tree Representations of Factors 206  
9.4 Graphical Models 208  
9.4.1 Showing Belief Networks 209  
9.4.2 Example Belief Networks 210  
9.5 Inference Methods 216  
9.5.1 Showing Posterior Distributions 217  
9.6 Naive Search 218  
9.7 Recursive Conditioning 220  
9.8 Variable Elimination 224  
9.9 Stochastic Simulation 227  
9.9.1 Sampling from a Discrete Distribution 227  
9.9.2 Sampling Methods for Belief Network Inference 229  
9.9.3 Rejection Sampling 229  
9.9.4 Likelihood Weighting 230  
9.9.5 Particle Filtering 231  
9.9.6 Examples 233  
9.9.7 Gibbs Sampling 234  
9.9.8 Plotting Behavior of Stochastic Simulators 236  
9.10 Hidden Markov Models 238  
9.10.1 Exact Filtering for HMMs 240  
9.10.2 Localization 241  
9.10.3 Particle Filtering for HMMs 244  
9.10.4 Generating Examples 246  
9.11 Dynamic Belief Networks 247  
9.11.1 Representing Dynamic Belief Networks 247  
9.11.2 Unrolling DBNs 250  
9.11.3 DBN Filtering 251  

10 Learning with Uncertainty 253  
10.1 Bayesian Learning 253  
10.2 K-means 257  
10.3 EM 261  

11 Causality 267  
11.1 Do Questions 267  
11.2 Counterfactual Example 269  
11.2.1 Firing Squad Example 271  

12 Planning with Uncertainty 275  
12.1 Decision Networks 275  
12.1.1 Example Decision Networks 277  
12.1.2 Decision Functions 283  
12.1.3 Recursive Conditioning for Decision Networks 284  
12.1.4 Variable Elimination for Decision Networks 287  
12.2 Markov Decision Processes 289  
12.2.1 Problem Domains 291  
12.2.2 Value Iteration 299  
12.2.3 Value Iteration GUI for Grid Domains 300  
12.2.4 Asynchronous Value Iteration 302  

13 Reinforcement Learning 307  
13.1 Representing Agents and Environments 307  
13.1.1 Environments 307  
13.1.2 Agents 308  
13.1.3 Simulating an Environment-Agent Interaction 309  
13.1.4 Party Environment 310  
13.1.5 Environment from a Problem Domain 311  
13.1.6 Monster Game Environment 312  
13.2 Q Learning 315  
13.2.1 Exploration Strategies 317  
13.2.2 Testing Q-learning 318  
13.3 Q-learning with Experience Replay 320  
13.4 Stochastic Policy Learning Agent 322  
13.5 Model-based Reinforcement Learner 324  
13.6 Reinforcement Learning with Features 327  
13.6.1 Representing Features 328  
13.6.2 Feature-based RL Learner 331  
13.7 GUI for RL 334  

14 Multiagent Systems 339  
14.1 Minimax 339  
14.1.1 Creating a Two-player Game 339  
14.1.2 Minimax and α-β Pruning 342  
14.2 Multiagent Learning 344  
14.2.1 Simulating Multiagent Interaction with an Environment 344  
14.2.2 Example Games 346  
14.2.3 Testing Games and Environments 347  

15 Individuals and Relations 349  
15.1 Representing Datalog and Logic Programs 349  
15.2 Unification 351  
15.3 Knowledge Bases 352  
15.4 Top-down Proof Procedure 354  
15.5 Logic Program Example 356  

16 Knowledge Graphs and Ontologies 359  
16.1 Triple Store 359  
16.2 Integrating Datalog and Triple Store 362  

17 Relational Learning 365  
17.1 Collaborative Filtering 365  
17.1.1 Plotting 369  
17.1.2 Loading Rating Sets from Files and Websites 372  
17.1.3 Ratings of Top Items and Users 373  
17.2 Relational Probabilistic Models 375  

18 Version History 381  
Bibliography 383  
Index 385  

AIPython contains runnable code for the book Artificial Intelligence: Foundations of Computational Agents, 3rd Edition [Poole and Mackworth, 2023]. It has the following design goals:

  • Readability is more important than efficiency, although the asymptotic complexity is not compromised. AIPython is not a replacement for well-designed libraries or optimized tools. Think of it as a model of an engine made of glass, so you can see the inner workings; don't expect it to power a big truck, but it lets you see how a metal engine can power one.

  • It uses as few libraries as possible. A reader only needs to understand Python. Libraries hide details that we make explicit. The only library used is matplotlib, for plotting and drawing.
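To illustrate that design choice, here is a minimal sketch of the kind of plotting helper AIPython relies on matplotlib for. The function name `plot_run` and its parameters are hypothetical, for illustration only; they are not taken from the book's code.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot_run(values, filename="run.png"):
    """Plot a sequence of values (e.g., costs per step) and save to a file.

    A hypothetical sketch in the spirit of AIPython's plotting utilities,
    not code from the book itself.
    """
    fig, ax = plt.subplots()
    ax.plot(range(len(values)), values)
    ax.set_xlabel("step")
    ax.set_ylabel("value")
    fig.savefig(filename)
    plt.close(fig)
    return filename

plot_run([0, 1, 3, 2, 5], "example_run.png")
```

Because matplotlib is the only dependency, a reader who understands plain Python plus this one plotting idiom can follow every file in the distribution.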

