Final review

Here is a list of thoughts, topics, and example questions, in no particular order:
  • The final exam will be comprehensive.


  • Please bring a calculator. Notes will not be allowed.


  • Please brush up on utility theory. There will be at least one question that requires it.


  • Please review the previous two midterm exams. I will probably repeat at least one question, maybe more.


  • Know what expected values are. Know what it means to marginalize away probability.
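
As a concrete refresher (the joint distribution below is made up purely for illustration): summing a joint distribution over one variable marginalizes that variable away, and an expected value weights each outcome by its probability.

```python
# Joint distribution P(Weather, Mood) as a dict; marginalize away Mood.
joint = {('sun', 'happy'): 0.375, ('sun', 'sad'): 0.125,
         ('rain', 'happy'): 0.25, ('rain', 'sad'): 0.25}
weather = {}
for (w, m), p in joint.items():
    weather[w] = weather.get(w, 0.0) + p  # sum over Mood
print(weather)  # → {'sun': 0.5, 'rain': 0.5}

# Expected value of a payoff that depends only on the weather:
payoff = {'sun': 10, 'rain': -2}
expected = sum(weather[w] * payoff[w] for w in weather)
print(expected)  # → 4.0
```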


  • Know how to use minimax for maximizing utility in competitive (constant-sum) games.
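
As a refresher, a minimal minimax sketch over a game tree represented as nested lists (leaves hold utilities for the maximizing player; the tiny tree at the bottom is illustrative):

```python
def minimax(node, maximizing):
    # Leaves hold utilities; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is a MAX node; its children are MIN nodes.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3 (MIN yields 3 and 2; MAX picks 3)
```

In a constant-sum game, maximizing your own utility and minimizing your opponent's are the same thing, which is why a single value per leaf suffices.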


  • Be able to reason intelligently about cooperative (non-constant-sum) games.


  • Know how alpha-beta pruning works. I might present you with a game tree and ask you which branches can be pruned.
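
A sketch of alpha-beta pruning over the same kind of nested-list tree; the `break` statements are the cutoffs, and any children skipped by a cutoff are exactly the pruned branches:

```python
def alphabeta(node, alpha, beta, maximizing):
    # Leaves hold utilities; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: remaining children are pruned
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: remaining children are pruned
        return value

tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # → 3 (the 9 leaf is pruned)
```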


  • Brush up on nuances of path-searching algorithms, such as breadth-first search, uniform cost search, and A* search.
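
A minimal uniform-cost search sketch (the `neighbors` callback is an assumed interface); note the `visited` set, which is what detects previously visited states:

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """`neighbors(state)` yields (next_state, step_cost) pairs.
    Returns the minimum total cost from start to goal, or None."""
    frontier = [(0, start)]  # priority queue ordered by path cost
    visited = set()
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:
            return cost
        if state in visited:
            continue  # already expanded via a cheaper path
        visited.add(state)
        for nxt, step in neighbors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt))
    return None

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', 'C', lambda s: graph[s]))  # → 2
```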


  • Describe how failing to detect previously visited states would affect the computational complexity of uniform-cost search.


  • Describe how you might implement uniform cost search for a continuous search space. Would the solution still be optimal? Near optimal? Why or why not?


  • Be aware that there are other search algorithms, like D* Lite, which provide additional nice properties, typically at the cost of significantly greater algorithmic complexity.


  • Make sure you know what a heuristic is. Know what makes a heuristic "admissible" for A* search.


  • What is determinism? Are robots deterministic? Are pseudo-random number generators deterministic? What about humans? What about quantum particles?


  • Name the three prominent types of autonomous learning that we outlined in class. Give examples of each.


  • Describe how learning and decision-making complement each other, or how they can be used together in the same system.


  • Know what Occam's razor suggests. Know what it does not suggest.


  • What is cross-validation?
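
A k-fold cross-validation sketch, to make the definition concrete (the `train` and `score` callbacks are assumed interfaces):

```python
def k_fold_scores(train, score, data, k):
    """k-fold cross-validation: hold out each fold once for testing,
    train on the remaining folds, and collect the k held-out scores."""
    folds = [data[i::k] for i in range(k)]  # simple interleaved split
    scores = []
    for i in range(k):
        held_out = folds[i]
        rest = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train(rest)
        scores.append(score(model, held_out))
    return scores
```

Averaging the k scores estimates how well the training procedure generalizes to data it has not seen.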


  • Give pseudo-code for bootstrap aggregation (a.k.a. "bagging").
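
One possible shape of such pseudo-code, written here as Python (the `train` callback and majority-vote aggregation are one common set of choices):

```python
import random

def bag(train, data, n_models):
    """Bootstrap aggregation: train each model on a bootstrap sample,
    drawn with replacement and the same size as the original data."""
    models = []
    for _ in range(n_models):
        sample = [random.choice(data) for _ in range(len(data))]
        models.append(train(sample))
    return models

def predict(models, x):
    # Aggregate by majority vote (suitable for classification).
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)
```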


  • Why are decision trees well-suited for use in bagging ensembles?


  • Give pseudo-code for training a decision tree. (Random divisions will suffice.)
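
One possible shape, as Python, using random divisions as the question allows (the tuple-based node representation is an arbitrary choice):

```python
import random

def train_tree(points, labels, depth=0, max_depth=5):
    """Train a decision tree with random divisions. A leaf stores the
    majority label; an interior node is (feature, pivot, left, right)."""
    if depth == max_depth or len(set(labels)) == 1:
        return max(set(labels), key=labels.count)  # leaf: majority label
    # Random division: pick a random feature and a random pivot value.
    feature = random.randrange(len(points[0]))
    pivot = random.choice(points)[feature]
    left = [(p, l) for p, l in zip(points, labels) if p[feature] < pivot]
    right = [(p, l) for p, l in zip(points, labels) if p[feature] >= pivot]
    if not left or not right:
        return max(set(labels), key=labels.count)  # degenerate split
    return (feature, pivot,
            train_tree([p for p, _ in left], [l for _, l in left],
                       depth + 1, max_depth),
            train_tree([p for p, _ in right], [l for _, l in right],
                       depth + 1, max_depth))

def classify(tree, point):
    while isinstance(tree, tuple):
        feature, pivot, low, high = tree
        tree = low if point[feature] < pivot else high
    return tree
```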


  • If you are training a single decision tree (not an ensemble of them), what is a better way to choose divisions?


  • What types of real-world data might create difficulties for training a decision tree? How would you make your decision tree training algorithm robust to these issues?


  • How does the computational complexity of an ensemble of decision trees differ from that of a single decision tree? For training? For evaluation?


  • How effective would an ensemble of decision trees be if you instantiated a new pseudo-random number generator for each tree, and seeded each one with the same value? Why?


  • How does classification differ from regression?


  • What is the role of dimensionality reduction in general intelligence?


  • Describe what an autoencoder is. What does it do? How does it work?


  • Give an example of a problem that requires learning/perception. Give an example of a problem that requires planning/decision-making. Give an example of a problem that requires both. Give an example of a problem that requires little of either.


  • Be able to explain how a genetic algorithm could be used to optimize the weights of a neural network.
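
A sketch of the idea: treat each weight vector as a genome, and let the fitness function score a network that uses those weights (the operators and parameters below are illustrative choices, not the only ones):

```python
import random

def evolve_weights(fitness, dim, pop_size=20, generations=50, sigma=0.1):
    """Genetic algorithm over real-valued weight vectors. `fitness`
    scores a candidate vector (e.g., negative loss of a neural
    network evaluated with those weights)."""
    pop = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(dim)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [w + random.gauss(0, sigma) for w in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because only fitness evaluations are needed, this works even when the network's loss is not differentiable.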


  • Be able to explain how a genetic algorithm could be used to produce a plan for reaching a goal.


  • Be able to explain how a genetic algorithm could be modified to evolve tree structures instead of vectors. Which operations would be different? Which ones would be the same?


  • Be able to contrast the computational complexity of systematic path search techniques (such as uniform cost search or A* search) with genetic algorithms (or other swarm optimization techniques) for refining a plan.


  • Contrast tournament selection with fitness proportionate selection for a genetic algorithm.
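
For reference, sketches of both selection schemes (the tournament size and uniform-sampling details are common but not universal choices):

```python
import random

def tournament_select(population, fitness, size=2):
    # Pick `size` candidates uniformly at random; the fittest wins.
    # Pressure depends only on fitness *rank*, so it tolerates
    # negative fitness values and arbitrary rescaling.
    contenders = random.sample(population, size)
    return max(contenders, key=fitness)

def proportionate_select(population, fitness):
    # Roulette wheel: selection probability proportional to fitness.
    # Requires non-negative fitness and is sensitive to its scale.
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]
```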


  • Know how Q-learning works. Be able to describe the role of various terms in the update equation for Q-learning. Be able to describe why Q-learning is robust to various conditions.
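
For reference, the update equation and its terms, sketched as a single update step (the dict-based Q-table and the default parameter values are illustrative):

```python
def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning update:
        Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
    alpha:      learning rate (how far to move toward the new estimate)
    gamma:      discount factor (how much future reward matters)
    max_a' ...: greedy estimate of the value of the next state"""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(s, a)]
```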


  • Understand how deep Q-learning works. Know how it differs from Q-learning.


  • What AI techniques would be well-suited for addressing each of these challenges: Tic-tac-toe, determining when to drill for oil, maze running, CAPTCHA breaking, Checkers, Chess, Go, medical diagnosis, cheating at gambling, weather prediction, auto-stabilizing UAV, face recognition, anomaly detection, solving a Rubik's Cube, deciding when to mail coupons to Walmart customers, packing boxes into a cargo crate, planning the fastest route to a destination on a map, planning a route to efficiently visit a large set of customers, guiding a robot to fold laundry, composing music.