
Certainly, let's break down the concepts of state representation and spatial/non-spatial policies as described in the StarCraft II paper.

1. State Representation

In the context of this paper, "state representation" refers to how the StarCraft II game environment is presented to the reinforcement learning agent. Instead of raw RGB pixels, the SC2LE environment provides the agent with "feature layers": structured 2D planes that abstract away the visual complexity of the rendered 3D game while preserving the information the agent needs to act.

Here's a breakdown of the state representation (see the sketch after this list):

- Screen feature layers: 2D planes aligned with the main camera view, each encoding one property of what is visible, such as unit type, unit ownership, hit points, or whether a unit is selected.
- Minimap feature layers: coarser planes covering the whole map, such as visibility (fog of war), the current camera location, and unit ownership.
- Non-spatial observations: scalar game information such as resources, supply, and the list of actions currently available.
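As a concrete illustration, here is a minimal, schematic sketch in Python/NumPy of what such an observation might look like. The layer names, resolutions, and dictionary layout are illustrative assumptions and do not reproduce the exact pysc2 observation format; a learning agent would typically stack the screen planes into a single tensor before feeding them to a network.

```python
import numpy as np

# Schematic sketch of a feature-layer observation, loosely modelled on the
# layers described in the SC2LE paper. Layer names, resolutions, and the
# dict layout are illustrative assumptions, not the exact pysc2 API.

SCREEN_RES = 64   # assumed screen feature-layer resolution (H x W)
MINIMAP_RES = 64  # assumed minimap feature-layer resolution

def make_dummy_observation():
    """Build a dummy observation with the three kinds of state information."""
    return {
        # Screen feature layers: one 2D plane per feature, aligned with the
        # camera view (e.g. unit type, ownership, hit points, selection).
        "feature_screen": {
            "unit_type": np.zeros((SCREEN_RES, SCREEN_RES), dtype=np.int32),
            "player_relative": np.zeros((SCREEN_RES, SCREEN_RES), dtype=np.int32),
            "unit_hit_points": np.zeros((SCREEN_RES, SCREEN_RES), dtype=np.int32),
            "selected": np.zeros((SCREEN_RES, SCREEN_RES), dtype=np.int32),
        },
        # Minimap feature layers: coarser planes covering the whole map
        # (e.g. visibility / fog of war, camera position, ownership).
        "feature_minimap": {
            "visibility_map": np.zeros((MINIMAP_RES, MINIMAP_RES), dtype=np.int32),
            "camera": np.zeros((MINIMAP_RES, MINIMAP_RES), dtype=np.int32),
            "player_relative": np.zeros((MINIMAP_RES, MINIMAP_RES), dtype=np.int32),
        },
        # Non-spatial observations: scalar game state such as resources,
        # supply, and the set of actions currently available.
        "player": {"minerals": 0, "vespene": 0, "food_used": 0, "food_cap": 0},
        "available_actions": np.array([0, 1, 7], dtype=np.int32),  # action ids
    }

if __name__ == "__main__":
    obs = make_dummy_observation()
    print(obs["feature_screen"]["unit_type"].shape)  # (64, 64)
    print(obs["available_actions"])                  # [0 1 7]
```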

2. Spatial Policy and Non-spatial Policy

The paper distinguishes between "spatial policies" and "non-spatial policies" based on the type of action they control. This distinction is crucial because StarCraft II involves both actions that target locations in the game world (spatial) and actions that don't (non-spatial). For example, ordering units to move requires picking a point on the screen or minimap (a spatial argument), whereas choosing which base action to issue, or whether to queue it, involves no spatial target (non-spatial). A sketch of this decomposition follows.
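Below is a minimal policy-network sketch in Python (PyTorch) with a non-spatial head over action identifiers and a spatial head over screen coordinates, loosely in the spirit of the FullyConv baseline described in the paper. The layer sizes, the number of input channels, and the action count are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialNonSpatialPolicy(nn.Module):
    """Sketch of a policy with a non-spatial head (which action to take) and
    a spatial head (where on the screen to take it). Sizes are assumptions."""

    def __init__(self, in_channels=17, num_actions=524, spatial_res=64):
        super().__init__()
        # Shared convolutional trunk over the stacked screen feature layers.
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        # Spatial head: a 1x1 conv gives one logit per screen location.
        self.spatial_head = nn.Conv2d(32, 1, kernel_size=1)
        # Non-spatial head: fully connected layers over the flattened trunk
        # output, producing a distribution over action identifiers.
        self.fc = nn.Linear(32 * spatial_res * spatial_res, 256)
        self.action_head = nn.Linear(256, num_actions)

    def forward(self, feature_layers):
        # feature_layers: (batch, channels, H, W) stacked screen feature planes
        x = F.relu(self.conv1(feature_layers))
        x = F.relu(self.conv2(x))

        # Spatial policy: softmax over all H*W screen coordinates.
        spatial_logits = self.spatial_head(x).flatten(start_dim=1)
        spatial_policy = F.softmax(spatial_logits, dim=-1)

        # Non-spatial policy: softmax over the action-type identifiers.
        hidden = F.relu(self.fc(x.flatten(start_dim=1)))
        action_policy = F.softmax(self.action_head(hidden), dim=-1)
        return action_policy, spatial_policy

if __name__ == "__main__":
    policy = SpatialNonSpatialPolicy()
    dummy_obs = torch.zeros(1, 17, 64, 64)       # batch of stacked feature layers
    action_probs, spatial_probs = policy(dummy_obs)
    print(action_probs.shape)   # torch.Size([1, 524])
    print(spatial_probs.shape)  # torch.Size([1, 4096]) -> 64*64 screen positions
```

In this sketch the agent first samples an action identifier from the non-spatial policy and, only if that action requires a location argument, samples a coordinate from the spatial policy.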

In summary, the state representation in SC2LE is based on feature layers that provide a structured, abstracted view of the game world. The policy is then decomposed into non-spatial and spatial components to handle the different types of actions in StarCraft II, enabling the agent to make both strategic decisions (action type) and tactical decisions (where to act in the game world).
