Add: [Script] Native priority queue; useful e.g. for pathfinders. #8091
Conversation
Back in the day (and I am not saying that should influence this PR, this is merely for context) we explicitly opted not to do this, as it would open the door to many different types of queues, sorting algorithms, etc. We didn't want this in the codebase, as it could lead to tons of native libraries, since they would be faster than Squirrel. This is partially why the AI libraries were created. Again, not saying this is a bad idea, just wanted to add context. Maybe it is good to think about which libraries are accepted and when it goes "too far" :)
Sure, but on the other hand, pathfinding is one of the core things many AIs do. And it is not a good user experience when an AI appears to do nothing for many game years, so any improvement in this area helps. I did toy with the idea of making the whole generic A* native, but as the algorithm spends most of its time calling user-defined callback functions, the speed-up is doubtful due to the switching overhead. I might still run some tests to measure it.
Not sure the memory tracking would be worth it. It costs some real-time performance, and e.g. ScriptList doesn't track its internal memory use either.
Yeah, it's probably not worth it with memory tracking. It's also probably not going to be significant usage for most cases... maybe? I have no idea how deep these queues get in practice.
It's not like memory isn't tracked at all; the objects stored in the queue are themselves tracked by the Squirrel VM.
Memory tracking has vanished into thin air 😁
It's about 3% faster than the implementation in Squirrel code using AIList. But the major advantage is that it has better memory usage control.
I'm tempted to agree with @TrueBrain here. The only real use for this is implementing pathfinders, so... why not just expose the pathfinders?
I've seen different A* implementations in several AIs, but everyone has the queue part in common as far as I can tell. Even I am using different A* code for my canal pathfinder. The generic one seems to lack a way to detect when a planned bridge crosses another planned bridge, for example. EDIT: Actually, now that I think of it, this detection could be done in the neighbours callback; it would require walking the path backwards to find where bridges were previously planned.
@LordAro I wouldn't be against exposing some kind of pathfinder in principle. But if you want to preserve the uniqueness of each specific AI/GS, the pathfinder won't do much besides calling back into Squirrel.
As a (former?) AI and GS author, having a pathfinder built into the NoAI/NoGS engine would be a wonderful quality-of-life improvement. Writing an AI was one of the first "programming" things I did, so there was a lot to learn, but understanding pathfinding was by far the hardest thing to figure out, in part because when you're an outsider everything either seems entirely academic or akin to magic. Having something you could call that just works out of the box would have been wonderful. I did eventually get the pathfinder part working for roads, tried my hand at a ship pathfinder (that never quite worked like I wanted...), and never felt brave enough to create a rail pathfinder (in part because I knew I'd want to build double parallel tracks). In some ways, my AI (WmDOT) never progressed past the point of being a pathfinder: it builds roads between towns and tries to build some ships, but that's it. Just my two bits.
I found trAIns uses a very customized priority queue where the item class itself holds the priority. It makes Insert require only one parameter, the item. That makes it the first AI that wouldn't benefit from a native priority queue.
There are road/rail Squirrel pathfinder libraries already. If those are hard to use, we should improve the documentation for those libraries first and maybe add some good examples. The most interesting part of a pathfinder, and where AIs can differentiate themselves, is the scoring function and the tradeoffs made when estimating costs. This part should not be hardcoded in OpenTTD itself. The A* library could probably be replaced by a native version, but it'd take some profiling (not sure how doable that is for the Squirrel code) to figure out whether that actually leads to an improvement. A native priority queue sounds fine to me, since it's going to be necessary in basically every pathfinder, and there aren't really any user-visible choices you can make as an AI writer (you can only optimize for speed). With this I mostly want to echo/expand on TrueBrain's first comment; I haven't actually looked at the changes themselves yet.
The problem with a native pathfinder core is the same we already have with valuators: suspending a script from inside a C++ callback is not possible. |
@michicc I just found a way to make the "Native Heap" (now renamed to Sorted List) AI library not require absurd amounts of memory! It's even able to use the Exists function! In terms of performance it still requires about 3% more ticks than the PR's implementation.
if (inserted.second) {
	sq_addref(vm, &item); // Keep object alive.
	this->queue.emplace(priority, item);
}
As written, this breaks assumptions that pathfinders normally make: the expected behaviour is that a queue can hold the same item multiple times. The pathfinder separately keeps track of a list of closed items, i.e. items it doesn't need to revisit. When it pops something from the queue that is already closed, it ignores that entry.
It's expected that sometimes an item gets pushed multiple times with different priority values. The lowest priority value is supposed to win in that case, which is handled as per above.
This code blocks the second insert, which means an item cannot be inserted again with a lower priority than the initial one.
Well, I've seen both variants for priority queues in practice, and at least the road pathfinder AI lib seems to never insert an already included item again, but I'll remove the check if you think this better matches current AI practice.
Actually, on second thought, unless I'm completely off, A* with an admissible heuristic should never be able to produce a lower cost for an already processed node.
Does any AI/lib implement something different than A*?
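For reference (my reading, not something stated in the thread): the "never reopen a processed node" guarantee strictly needs the stronger consistency (monotonicity) condition, which implies admissibility. With edge cost $c(n, m)$ and $g(m) = g(n) + c(n, m)$:

```latex
h(n) \le c(n, m) + h(m) \quad \text{for every edge } (n, m)
\;\Rightarrow\;
f(m) = g(n) + c(n, m) + h(m) \ge g(n) + h(n) = f(n)
```

so $f$ is non-decreasing along any path, and a node popped from the queue already carries its optimal $g$. An admissible but inconsistent heuristic can, in principle, still require reinsertion.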
Right, but a node is processed at the time it is popped from the queue, not on insert.
> Right, but a node is processed at the time it is popped from the queue, not on insert.
I don't see how that contradicts @michicc's comment? Higher-cost duplicates of the same node never need to be processed at all, and with an admissible heuristic all duplicates should be higher-cost, so there's no reason to insert them.
On the other hand, using an inadmissible heuristic might be useful as a performance tradeoff - given OpenTTD's fairly wide-open maps, the results would probably look okay.
ScriptList::Valuate charges 5 Squirrel ops per item in the list, to prevent an AI from taking much CPU time without ever being suspended for exhausting its Squirrel ops. The operations on the std::priority_queue are O(log N). As written, they charge a constant number of Squirrel ops, which means that as far as an AI is concerned they're constant time. That makes any comparison against the Squirrel implementation fundamentally flawed, since the results are going to vary significantly between tests done with a small number of items in the queue versus tests done with a large number of items in the queue.
Updated to allow duplicate inserts, which allows dropping the separate set. To still allow efficient destruction, I switched to directly using the underlying std algorithms that back
This is about 20% better than the FibonacciHeap implementation from 10 years ago (which was the best at the time) in some random pathfinder tests I just did. I believe that matches @SamuXarick's numbers.
Took a bit of faffing around to get the reference counting to not crash, but according to @SamuXarick it works and is faster than the Squirrel code.