This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
I've seen IDA* pseudocode like this in a number of Google search results, but it doesn't even implement A*; it implements an iterative depth-first search that completely loses the advantages of A*. Additionally, I've found IDA* in two different textbooks and talked to a university Ph.D. about it, and their accounts of IDA* are nothing like this. 74.140.141.209 ( talk) 03:13, 17 September 2011 (UTC)
It's not clear to me what new_start_cost is doing. It's passed as an argument to the recursive call, but doesn't appear in the function signature. 137.186.44.205 ( talk) 22:54, 6 April 2013 (UTC)
Well, I do not understand Python..... -- 120.195.63.69 ( talk) 11:57, 14 December 2009 (UTC)
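Since the comments above all concern what the article's pseudocode should look like, here is a minimal IDA* sketch for reference. The names (ida_star, search, graph, h) are hypothetical, not the article's exact pseudocode; graph maps a node to (successor, edge_cost) pairs, and h is assumed admissible with h(goal) == 0.

```python
# A minimal IDA* sketch (hypothetical names, not the article's exact pseudocode).
# graph: dict mapping node -> list of (successor, edge_cost) pairs.
# h: admissible heuristic with h(goal) == 0.

def ida_star(start, goal, graph, h):
    bound = h(start)                      # initial threshold: f-value of the root
    path = [start]
    while True:
        t = search(path, 0, bound, goal, graph, h)
        if t == "FOUND":
            return path, bound            # with h(goal) == 0, bound is the optimal cost
        if t == float("inf"):
            return None, float("inf")     # goal unreachable
        bound = t                         # smallest f-value that exceeded the old bound

def search(path, g, bound, goal, graph, h):
    node = path[-1]
    f = g + h(node)
    if f > bound:
        return f                          # pruned: report the exceeding f-value upward
    if node == goal:
        return "FOUND"
    minimum = float("inf")
    for succ, cost in graph[node]:
        if succ in path:                  # avoid cycles along the current path
            continue
        path.append(succ)
        t = search(path, g + cost, bound, goal, graph, h)
        if t == "FOUND":
            return "FOUND"
        minimum = min(minimum, t)
        path.pop()
    return minimum
```

Unlike a plain iterative-deepening depth-first search, the cutoff here is on f = g + h rather than on depth, which is what preserves A*-style pruning.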
I think there is an error in the pseudocode on how the new threshold is calculated (next_cost_limit).
In A Parallel Implementation of Iterative-Deepening-A* it says
For the first iteration, this threshold is the cost (f-value) of the initial state. For each new iteration, the threshold used is the minimum of all node costs that exceeded the (previous) threshold in the preceding iteration.
The Python pseudocode in the article doesn't enforce this minimum, and the algorithm might fail on some graphs as a result. I would therefore say that the line
next_cost_limit = min(next_cost_limit, new_cost_limit)
should be changed to
if new_cost_limit > cost_limit and new_cost_limit < next_cost_limit: next_cost_limit = new_cost_limit —Preceding unsigned comment added by 83.76.247.10 ( talk) 18:06, 17 December 2010 (UTC)
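The update rule quoted from the paper can be sketched as follows (hypothetical names, not the article's code): the next threshold is the minimum of the f-values that strictly exceeded the previous threshold.

```python
import math

# Sketch of the threshold-update rule under discussion (hypothetical names):
# the next threshold is the minimum of all f-values that exceeded the
# previous threshold in the preceding iteration.

def next_threshold(reported_costs, cost_limit):
    # reported_costs: f-values reported back by pruned nodes this iteration
    over = [c for c in reported_costs if c > cost_limit]
    return min(over) if over else math.inf   # inf means nothing left to explore

# f-values 3 and 5 exceeded the old limit 2; the new limit is the smaller, 3
assert next_threshold([3, 5, 2], 2) == 3
```

Note that if children only ever report f-values that already exceed the current bound (as in the recursive formulation, where a node whose f-value is within the bound is expanded rather than reported), the article's plain min() is equivalent and the extra `> cost_limit` guard is redundant.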
Unlike Dijkstra's algorithm or A*, this algorithm is poorly covered in web searches. The only relevant hit I found is http://intelligence.worldofcomputing.net/ai-search/iterative-deepening-a-star.html but it looks more like a blog post. Has it been properly published? It is vital to put in some serious reference. Audriusa ( talk) 08:07, 30 March 2011 (UTC)
IDA* is covered in ISBN 0-13-604259-7, so it is certainly a "real" algorithm. As for web resources, I'm not sure. Cheezmeister ( talk) 13:21, 10 April 2011 (UTC)
I can vouch that this algorithm works. I independently tried to implement IDA* in Python, based on the description in Matt Ginsberg's book Essentials of Artificial Intelligence; that description is murkier than the one here and doesn't use structured programming. I was having trouble getting my implementation to work on larger examples (searching to depth 10 in a space of size 36!). I used this code to help me debug, almost using it in its entirety by the time I got it to work. I'm not sure about the "Error on Threshold Calculation" comment above, however.
Rmkeller ( talk) 00:29, 18 January 2013 (UTC)
Does the order of generating successors matter in the algorithm as presented here? Currently, the search stops at the first encounter with a solution, but I don't think that guarantees optimality unless successors(node) returns nodes succ in increasing order of cost(node, succ). QVVERTYVS ( hm?) 16:03, 12 January 2015 (UTC)
successors(node) node expanding function, expand nodes ordered by g + h(node)
The above line is the article's description of the successors. The discussion earlier in 2014 concluded that the successors have to be considered in increasing order of their f-values, but I do not think this is true. The above line may be a remnant of that discussion.
The reason the first goal state encountered is the one with the lowest-cost path is that when goal node n is found, the threshold was set to f(n) at the end of the preceding round. That's why higher-cost paths to goal states will not be considered, and that's why the order in which the successors are generated does not matter.
The (incorrect) argument for having to order the successors would seem to be as follows. Consider starting state A, with two successors B and C, which both are goal states. Let h(A) = 1 and the cost of A->B is 2 and the cost of A->C is 3. If we just went through the successors and detected a goal state as soon as we saw one, then of course a bad ordering of the successors would lead to the more expensive goal state C. But the first time B and C are encountered we do not check whether they are goal states. Instead, their f-values f(B)=2 and f(C)=3 are used in computing the new threshold, which will be 2. On the next iteration any visit to C will not check whether C is a goal state because f(C) is above the threshold. But B will be detected as a goal state as it should.
That's why the misleading "expand nodes ordered by g + h(node)" should be removed, as it plays no role in the optimality of the algorithm. — Preceding unsigned comment added by 84.253.238.230 ( talk) 08:32, 27 January 2021 (UTC)
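The A/B/C example above can be checked with a small sketch (hypothetical helper names, not the article's code): with either successor ordering, IDA* returns the cheaper goal B with cost 2, because goal tests only happen for nodes whose f-value is within the current threshold.

```python
import math

# Compact IDA* returning only the goal's path cost (hypothetical names).
def ida_cost(start, succ, h, is_goal):
    bound = h(start)
    def search(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f, None                # pruned: no goal test happens here
        if is_goal(node):
            return None, g                # found: return the actual path cost
        minimum = math.inf
        for child, cost in succ(node):
            t, found = search(child, g + cost, bound)
            if found is not None:
                return None, found
            minimum = min(minimum, t)
        return minimum, None
    while True:
        t, found = search(start, 0, bound)
        if found is not None:
            return found
        if t == math.inf:
            return None
        bound = t

# The example from the comment above: h(A) = 1, cost A->B = 2, A->C = 3,
# and both B and C are goal states.
h = {"A": 1, "B": 0, "C": 0}.__getitem__
goal = lambda n: n in ("B", "C")
order1 = lambda n: [("B", 2), ("C", 3)] if n == "A" else []
order2 = lambda n: [("C", 3), ("B", 2)] if n == "A" else []
# Either order finds the cheaper goal B with cost 2: on the first pass the
# threshold is 1, both children are pruned, and the threshold rises to 2,
# so C (f-value 3) is never goal-tested.
assert ida_cost("A", order1, h, goal) == 2
assert ida_cost("A", order2, h, goal) == 2
```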
In the routine "ida_star" of the pseudocode, bound is set to "h(root)". But "h(root)" is a lower bound for the cost. Shouldn't it be an upper bound? -- Been-jamming ( talk) 19:06, 19 December 2021 (UTC)
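A lower bound is intentional: the point of iterative deepening is to start the threshold at h(root), which (for an admissible h) never exceeds the optimal cost, and to raise it each round to the smallest f-value that was pruned, until the goal comes within reach. The sketch below (hypothetical names, not the article's pseudocode) records the sequence of thresholds to illustrate this.

```python
import math

# Sketch of the threshold schedule (hypothetical names): the bound starts
# at h(root), a lower bound on the optimal cost, and each iteration raises
# it to the smallest pruned f-value until the goal is reached.

def threshold_schedule(start, succ, h, is_goal):
    bounds = [h(start)]                   # the first threshold is h(root)
    def search(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f, False               # pruned: report f upward
        if is_goal(node):
            return f, True
        minimum = math.inf
        for child, cost in succ(node):
            t, found = search(child, g + cost, bound)
            if found:
                return t, True
            minimum = min(minimum, t)
        return minimum, False
    while True:
        t, found = search(start, 0, bounds[-1])
        if found or t == math.inf:
            return bounds
        bounds.append(t)                  # raise to the smallest pruned f-value

# Line graph A -> B -> C with unit edge costs and an underestimating h:
succ = {"A": [("B", 1)], "B": [("C", 1)], "C": []}.__getitem__
h = {"A": 1, "B": 1, "C": 0}.__getitem__
# The thresholds grow from h(root) = 1 up to the optimal cost 2.
assert threshold_schedule("A", succ, h, lambda n: n == "C") == [1, 2]
```

Starting from an upper bound would make the first iteration an unbounded depth-first search with no pruning at all, which is exactly what the lower-bound schedule avoids.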