Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path

We study the sample complexity of learning an $ε$-optimal policy in
the Stochastic Shortest Path (SSP) problem. We first derive sample complexity
bounds when the learner has access to a generative model. We show that there
exists a worst-case SSP instance with $S$ states, $A$ actions, minimum cost
$c_{\min}$, and maximum expected cost of the optimal policy over all states
$B_{\star}$, where any algorithm requires at least
$Ω(SAB_{\star}^3/(c_{\min}ε^2))$ samples to return an
$ε$-optimal policy with high probability. Surprisingly, this implies
that whenever $c_{\min}=0$ an SSP problem may not be learnable, thus revealing
that learning in SSPs is strictly harder than in the finite-horizon and
discounted settings. We complement this result with lower bounds when prior
knowledge of the hitting time of the optimal policy is available and when we
restrict optimality by competing against policies with bounded hitting time.
Finally, we design an algorithm with matching upper bounds in these cases. This
settles the sample complexity of learning $ε$-optimal policies in SSPs
with generative models.
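As a rough illustration of the $c_{\min}=0$ barrier (a direct reading of the bound stated above, not a reproduction of the paper's argument), the lower bound diverges as the minimum cost vanishes:
$$
\Omega\!\left(\frac{S A B_{\star}^{3}}{c_{\min}\, ε^{2}}\right) \;\longrightarrow\; \infty \qquad \text{as } c_{\min} \to 0,
$$
so no finite number of samples can suffice without additional assumptions, such as prior knowledge of, or a restriction to, bounded hitting times.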
We also initiate the study of learning $ε$-optimal policies without
access to a generative model (i.e., the so-called best-policy identification
problem), and show that sample-efficient learning is impossible in general. On
the other hand, efficient learning can be made possible if we assume the agent
can directly reach the goal state from any state by paying a fixed cost. We
then establish the first upper and lower bounds under this assumption.
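One way to formalize this reachability assumption (the notation $g$, $a_{\text{goal}}$, and $J$ is ours, introduced only for illustration): every state $s$ admits an action $a_{\text{goal}}$ with
$$
P(g \mid s, a_{\text{goal}}) = 1 \qquad \text{and} \qquad c(s, a_{\text{goal}}) = J
$$
for a fixed cost $J > 0$, where $g$ denotes the goal state, so that a bounded-cost path to the goal is available from anywhere.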
Finally, using similar analytic tools, we prove that horizon-free regret is
impossible in SSPs under general costs, resolving an open problem in
Tarbouriech et al. (2021c).