Agents and environments are part of every AI-based intelligent system. An agent is placed in an environment, where it observes that environment and makes its own decisions. It observes the environment with the help of sensors and acts on the environment through actuators.
However, the term ‘bot’, an abbreviation of ‘robot’, has become a common substitute for the term ‘agent’. For example, we refer to a conversational agent as a chatbot, a spam agent as a spambot, a mail agent as a mailbot, and so on.
Content: Agents and Environment in Artificial Intelligence
- What is an Agent?
- Types of Agents in AI
- What is an Environment?
- Types of Environments in AI
- How Does it Work?
What is an Agent?
An agent is something that acts in an environment. If we take the earth’s environment as an example, then humans, animals, birds, robots, aeroplanes, gravity, wind, rain, companies, and so on are all agents. What we are interested in is how agents act in the environment, so we always judge an agent by its actions.
When can we declare that an agent is intelligent?
We can declare an agent intelligent if:
- Its actions are appropriate for its circumstances and goals.
- Its actions remain flexible in the face of a changing environment and changing goals.
- It learns from its past experiences.
- It makes appropriate choices given its perceptual and computational limitations: an agent cannot observe the state of the world directly, and it has only finite memory and limited time to act.
So, viewed broadly, the environment is the problem and the agent is the solution.
Important Terminologies
- Percept: Whatever the agent observes in the environment at a given moment.
- Actuators: The mechanisms through which the agent puts its decisions into action.
- Effectors: The parts of the agent, such as wheels, arms, or a display, that actually affect the environment.
Types of Agents
Simple Reflex Agents
A simple reflex agent acts on the current percept and ignores the percept history, i.e., everything the agent has perceived to date. The agent responds to the percept by following condition-action rules: each rule maps a particular condition to a particular action, so whenever that condition occurs, the corresponding action is taken.
A reflex agent can function successfully only if the environment is fully observable; it is difficult for reflex agents to function in a partially observable environment.
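To make this concrete, here is a minimal sketch of a simple reflex agent for the classic two-square vacuum world; the percept format, square names, and condition-action rules are invented for illustration.

```python
# A minimal sketch of a simple reflex agent for a two-square vacuum world.
# The percept is a (location, status) pair; the condition-action rules
# below are illustrative, not taken from any particular library.

def reflex_vacuum_agent(percept):
    """Map the current percept to an action using condition-action rules."""
    location, status = percept
    if status == "Dirty":
        return "Suck"    # rule: the current square is dirty -> clean it
    if location == "A":
        return "Right"   # rule: square A is clean -> move to square B
    return "Left"        # rule: square B is clean -> move to square A

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that the agent is stateless: the same percept always produces the same action, which is exactly why it depends on the environment being fully observable.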
Model-Based Agents
A model-based agent considers the percept history, which helps it keep track of the situation. This allows it to operate in a partially observable environment.
Because it takes the percept history into consideration, the model-based agent maintains a more comprehensive internal view of the environment. The agent thus knows how the environment has evolved and can choose its actions accordingly.
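The sketch below extends the same vacuum-world example (names and rules again illustrative) with an internal model: the agent records the last known status of each square, so it can decide sensibly even when the current percept reveals only part of the world.

```python
# A sketch of a model-based agent. The agent keeps an internal model
# (the last known status of each vacuum-world square), updates it from
# each percept, and uses it to act under partial observability.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: what the agent believes about each square.
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update the model from the percept
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"              # believed goal state: do nothing
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # Suck
print(agent.act(("A", "Clean")))  # Right (B's status is still unknown)
```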
Goal-Based Agents
Knowing the current state of the environment is not always enough for the agent to decide what action to take. If the agent also knows its goals, it can take actions that reduce its distance from those goals. To achieve a goal, the agent has to consider the possible actions and choose one that leads toward the goal state. Considering sequences of possible actions for different scenarios requires searching and planning, which is what makes the agent proactive.
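As a rough sketch of the search step inside a goal-based agent, the following breadth-first search finds an action sequence that reaches a goal state in a small hand-made transition graph; the states and actions are invented for the example.

```python
# A sketch of goal-based planning: breadth-first search over a small
# hand-made state graph to find an action sequence reaching the goal.

from collections import deque

def bfs_plan(start, goal, transitions):
    """Return a list of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in transitions.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None

# state -> [(action, resulting state), ...]  (purely illustrative)
transitions = {
    "home":   [("walk", "street")],
    "street": [("bus", "office"), ("walk", "park")],
    "park":   [("walk", "office")],
}
print(bfs_plan("home", "office", transitions))  # ['walk', 'bus']
```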
Utility-Based Agents
Along with the features of goal-based agents, this agent adds an extra utility measurement. This measurement allows the agent to rate each possible action by the desirability of its outcome and to select the action that maximizes utility. Thus, a utility-based agent not only focuses on achieving the goal but also on determining the best way to achieve it.
This is useful when there are multiple alternatives for achieving the goal, and the agent has to choose the action that would lead to the best outcome.
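Here is a minimal sketch of utility-based selection, assuming a made-up route-planning scenario: each candidate action is scored by a utility function, and the agent picks the highest-scoring one.

```python
# A sketch of utility-based action selection. All routes reach the goal
# (the office); the utility function decides which way is best.
# The route data and utility weights are invented for illustration.

routes = {
    "highway":    {"time_min": 30, "toll": 5.0},
    "back_roads": {"time_min": 45, "toll": 0.0},
    "downtown":   {"time_min": 40, "toll": 2.0},
}

def utility(outcome):
    """Higher is better: penalize travel time and toll cost."""
    return -(outcome["time_min"] + 10 * outcome["toll"])

best_route = max(routes, key=lambda r: utility(routes[r]))
print(best_route)  # back_roads: every route achieves the goal, this one scores best
```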
Learning Agents
A learning agent is one that can learn from its percept history, that is, an agent possessing learning capabilities. It uses an extra learning element to improve itself and become more knowledgeable about the environment.
Initially, it acts on its basic knowledge; the learning element then keeps using feedback to improve the performance element so that the agent makes more knowledgeable decisions.
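The following sketch shows one simple form this feedback loop can take, using a bandit-style setup with invented rewards: the learning element maintains running value estimates per action, and the performance element mostly picks the action with the best current estimate.

```python
# A sketch of a learning agent's feedback loop. The learning element
# updates per-action value estimates from rewards; the performance
# element exploits the estimates (with occasional exploration).
# The two-action setup and reward numbers are purely illustrative.

import random

actions = ["left", "right"]
values = {a: 0.0 for a in actions}   # knowledge the learning element maintains
counts = {a: 0 for a in actions}

def environment_reward(action):
    # Hypothetical environment: "right" pays off more on average.
    return random.gauss(1.0 if action == "right" else 0.2, 0.1)

for step in range(200):
    # Performance element: mostly exploit, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    reward = environment_reward(action)
    # Learning element: update the running-average estimate from feedback.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the estimate for "right" should dominate after learning
```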
What is an Environment?
The environment is everything that influences the agent’s behaviour; in turn, the environment itself can be influenced by the agent’s actions.
Types of Environments in AI
Fully Observable vs Partially Observable
If the agent’s sensors can sense or access the complete state of the environment at any point in time, we declare it to be fully observable, for example, games like chess and checkers.
If the agent can determine only a partial state of the environment, we refer to that environment as partially observable, for example, driving a car in traffic.
Deterministic vs Stochastic
When the outcome of the agent’s action is completely predictable and can be precisely determined, we refer to this environment as a deterministic environment—for example, a mathematical equation.
However, when the outcome of an agent’s action cannot be predicted and is uncertain, we refer to such an environment as a stochastic environment, for example, a game such as Carrom.
Competitive vs Collaborative
When multiple agents compete against each other to achieve a conflicting goal where the success of one agent is directly linked to another agent’s failure, we refer to such an environment as competitive—for example, the environment of a game like chess.
However, a collaborative environment is one where multiple agents work together to achieve a common goal, and each agent’s success depends on the success of the group, for example, a fleet of self-driving cars sharing traffic information to avoid collisions.
Single-agent vs multi-agent
In a single-agent environment, only one agent interacts with the environment to achieve its goal. For example, in a maze puzzle, a single agent interacts with the environment to find a path to its goal state.
In a multi-agent environment, multiple agents interact with each other and with the environment to achieve their individual or collective goals—for example, a game including multiple players.
Static vs Dynamic
An environment that does not change while the agent is deliberating is static, i.e., the state of the environment changes only through the agent’s own actions, for example, mathematical problems or logic puzzles.
An environment that also changes on its own over time, independently of the agent’s actions, is dynamic, for example, video games or robotic applications.
Discrete vs Continuous
An environment in which the states of the environment and the actions of the agent are finite and discrete is a discrete environment, for example, the game of chess.
An environment in which the states and actions are continuous-valued, and hence infinite, is a continuous environment, for example, robotics or a control system.
Episodic vs Sequential
In an episodic environment, the agent’s actions do not affect the future states of the environment; the agent can simply maximize the immediate reward in each independent episode by following some optimal policy, for example, an image-classification task in which each prediction is independent of the previous ones.
In the sequential environment, the agent’s current action affects the environment’s future state—for example, video games.
Known vs Unknown
An environment where the agent knows the complete ‘rules of the game’, i.e., the state space, the transition model, and the reward structure, is a known environment, for example, games such as tic-tac-toe. In an unknown environment, the agent must first learn how the environment works before it can make good decisions.
How Does it Work?
Agents receive stimuli from the environment they are interacting with. The stimuli can take the form of light, sound, mouse movement, words typed on a keyboard, a physical jerk, and so on. After receiving the stimuli, the agent performs some action that can affect the environment. Actions can take the form of displaying information on a screen, moving links, arms, or legs, accelerating a wheel, applying the brakes, and so on.
An agent comprises a body and a controller. The agent perceives or observes its environment through its body’s sensors. The controller receives these percepts from the body and sends commands back to the body, which carries them out through its actuators.
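A minimal sketch of this loop, with a made-up thermostat example: the body supplies a sensor reading and an actuator stub, and the controller maps the percept to a command.

```python
# A sketch of the body-controller split: the body exposes sensors
# (percepts in) and actuators (commands out), while the controller
# maps percepts to commands. The thermostat scenario is invented.

def sense_temperature():
    """Body: the sensor reading the controller receives (stubbed here)."""
    return 17.5  # degrees Celsius

def actuate(command):
    """Body: the actuator that carries the command out into the environment."""
    print(f"actuator -> {command}")

def controller(percept, setpoint=20.0):
    """Controller: decide a command from the current percept."""
    return "heater_on" if percept < setpoint else "heater_off"

# One pass of the perceive-decide-act loop.
percept = sense_temperature()
actuate(controller(percept))  # actuator -> heater_on
```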
Conclusion
Understanding agents and environments in artificial intelligence is essential before developing any intelligent system. To design an intelligent agent, it is very important to first understand what kind of agent you must build to perceive and act in a particular environment.