Agent

An entity that perceives and acts

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Agent function

A function from percept histories to actions, i.e., $f: P^* \rightarrow A$

Agent program

Runs on the physical architecture to perform $f$

agent = architecture + program
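
A minimal Python sketch of this split, assuming a toy (location, status) percept; the names `Percept`, `AgentFunction`, and `run` are illustrative, not standard:

```python
from typing import Callable, List, Tuple

Percept = Tuple[str, str]   # e.g. (location, status) in a toy vacuum world
Action = str

# The agent function f: P* -> A maps the whole percept history to an action.
AgentFunction = Callable[[List[Percept]], Action]

def run(f: AgentFunction, percepts: List[Percept]) -> List[Action]:
    """The 'architecture': feed each percept to the program, collect actions."""
    history: List[Percept] = []
    actions: List[Action] = []
    for p in percepts:
        history.append(p)
        actions.append(f(history))  # the program computes f on the history so far
    return actions
```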

Rational behavior

Doing the “right thing”: acting so as to achieve the best expected outcome and maximize the agent’s success.

Performance measure

Objective criterion for measuring success of an agent’s behavior

Rational Agent

For each possible percept sequence, select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

Task Environment (PEAS)

  • Performance measure
  • Environment
  • Actuators
  • Sensors

Properties of Task Environments

Fully observable (vs. partially observable)

Sensors provide access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic)

The next state of the environment is completely determined by the current state and the action executed by the agent.

Episodic (vs. sequential)

The agent’s experience is divided into episodes; the choice of the current action does not depend on actions taken in past episodes.

Equivalently, the current action does not affect future decisions.

Static (vs. dynamic)

The environment is unchanged while an agent is deliberating.

Discrete (vs. continuous)

A finite number of distinct states, percepts, and actions.

Single agent (vs. multi-agent)

An agent operating by itself in an environment.

Agent Functions and Programs

An agent is completely specified by the agent function mapping percept sequences to actions

One agent function (or a small equivalence class) is rational

Aim: Find a way to implement the rational agent function concisely

Table-Lookup Agent

Works only for simple situations.

Drawbacks

  • Huge table to store
  • Takes a long time to build the table
  • No autonomy: impossible to learn all correct table entries from experience
  • No guidance on filling in the correct table entries
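
A sketch of a table-driven agent, assuming a two-cell vacuum world with percepts of the form (location, status); the table entries shown are illustrative:

```python
# The entire agent function is stored explicitly as a table indexed
# by the percept sequence observed so far.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    # ...one entry for every possible percept sequence
}

percepts = []

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts))  # returns None for any unlisted history
```

Even in this toy world, the table needs one entry per possible percept sequence, which is why it explodes for any realistic environment.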

Simple Reflex Agent

Acts only when it observes a percept.

Updates its view of the state from the current percept only (deterministic; ignores the past).
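
A sketch of a simple reflex agent for the same toy vacuum world; the condition–action rules below are illustrative:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition–action rules applied to the current percept only; no memory."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"
```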

Model-Based Reflex Agent

Acts only when it observes a percept.

State is updated based on the percept, the current state, the most recent action, and a model of the world.
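
A sketch of the state update, assuming the same toy vacuum world; `update_state` and the state layout are illustrative:

```python
state = {"A": "Unknown", "B": "Unknown", "loc": None}  # internal world model
last_action = None

def update_state(state, last_action, percept):
    """New state from the old state, the most recent action, and the new percept."""
    location, status = percept
    state["loc"] = location
    state[location] = status  # the percept reveals the current cell's status
    # (a richer model could also use last_action to infer unobserved facts,
    #  e.g. that a cell just sucked clean stays clean)
    return state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    if state[state["loc"]] == "Dirty":
        action = "Suck"
    elif state["loc"] == "A":
        action = "Right"
    else:
        action = "Left"
    last_action = action
    return action
```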

Goal-Based Agent

Has explicit goals and acts to achieve them.

State is updated based on the percept, the current state, the most recent action, and a model of the world.
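
A sketch of goal-directed action selection on a toy one-dimensional world; `predict`, `goal_test`, and the goal x = 3 are all assumed for illustration:

```python
def predict(state, action):
    """Toy transition model: the state expected after taking `action`."""
    return {"x": state["x"] + (1 if action == "Right" else -1)}

def goal_test(state):
    return state["x"] == 3  # assumed goal: reach position x = 3

def goal_based_agent(state, actions=("Left", "Right")):
    # prefer an action whose predicted outcome satisfies the goal...
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    # ...otherwise move greedily toward it
    return min(actions, key=lambda a: abs(predict(state, a)["x"] - 3))
```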

Utility-Based Agent

Has a utility function and acts to maximize it.

State is updated based on the percept, the current state, the most recent action, and a model of the world.
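
A sketch of utility-maximizing action selection; the stochastic `outcomes` model and the `utility` function are assumed parameters, not anything from the notes:

```python
# `outcomes(state, action)` returns (probability, resulting_state) pairs;
# `utility(state)` scores a state, e.g. utility = lambda s: -abs(s["x"] - 3).

def expected_utility(action, state, outcomes, utility):
    return sum(p * utility(s) for p, s in outcomes(state, action))

def utility_based_agent(state, actions, outcomes, utility):
    # pick the action with the highest expected utility over its outcomes
    return max(actions, key=lambda a: expected_utility(a, state, outcomes, utility))
```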

Learning Agent

Updates its components (e.g., its utility function) based on feedback from a critic that applies a fixed performance standard.