Generate your connections automatically, too!: 3/11/2012 12:59:08


Matma Rex 
Level 12

E88B3EC, thanks for the report, I fixed the connection in map version 1.0.1 (currently pending public). I wonder why it didn't work, but I don't have time to investigate now.

Generate your connections automatically, too!: 3/22/2012 22:01:34


Nate
Level 19

I tried to use the tool, which worked great until I got to the upload part. Now it won't upload; it says it didn't work and to report the error. Any ideas?

Generate your connections automatically, too!: 3/29/2012 16:57:17


Matma Rex 
Level 12

Uh, try reporting the error. Did you take a screenshot? What did it say? (Apart from that it didn't work.) I can't fix it if I don't know which part fails.

Generate your connections automatically, too!: 3/29/2012 18:53:30


Nate
Level 19

Okay, here is what it says, word for word:

Starting up, be patient...

This tool will allow you to automatically create
connections on your map.

It will have to ask you for your e-mail and password. This data
is not remembered or used for any malicious purposes,
it's simply required to confirm that it's your map
and you have the right to edit it.

It will also need the map's id. You can get it
by clicking "Get Link for Sharing",
the id is the number displayed at the end of public link.

Enter the name of file to work on:
> C:\Users\Living Room\Documents\Nate's Files\Warlight Maps\Map 5.svg

And now the map's and your data: [warning - your password will be visible]
Map id: North America (Medium)
E-mail: *********
Password:
***

Working... this may take some time. Go have a cup of tea or something.
This map appears to have 408 territories.
Loading data...
Press enter to exit.|oooooooooooooooooooooooooooooooooooooooo | ETA: 00:00:00

It says a whole lot of other stuff as well, but I can't copy it fast enough before the program closes. However, it used to get to the part where it uploads, and then it says "** it didn't work, there might be a problem with your connection, please report it". Can you help, please?

Generate your connections automatically, too!: 3/29/2012 19:13:09


Moros 
Level 50

It closes as soon as you press ENTER.

Generate your connections automatically, too!: 3/29/2012 19:17:28


Nate
Level 19

I know that, but when I go back to check my map it still has connections missing. Here is the map, you should recognize it: click here.
E.g. the Central Mountains of Virginia and the Southwestern Virginia Mountains are still not connected, even though I used the tool.

Generate your connections automatically, too!: 3/29/2012 22:02:41


Matma Rex 
Level 12

Oh, but it did add some connections? Then it worked, mostly. It's not perfect, unfortunately, and you might've hit a corner case - just add the connection yourself.

Generate your connections automatically, too!: 3/29/2012 22:08:19


Perrin3088 
Level 49

Matma, it's a similar issue to the one E88 reported earlier...

Generate your connections automatically, too!: 3/31/2012 16:48:41

[16] Jasper 
Level 52

The fact that this program runs for several hours was something that really amazed me. As such, I decided to take a look at your source code and see how you did this. Though I don't really know Ruby, I was able to make out a lot from what I saw. I hope you don't mind me taking a moment to post my findings here.

First off, let me say that the way you reduced the problem to collision detection of bounding boxes is a very sound strategy and is likely to generate rather good results.

The first thing that stood out to me was the fact that you are lazily generating the bounding boxes, even though you know up front that you are going to need them for each and every territory. Though this does work, just calculating the bounding boxes first removes a point where you can make mistakes (the laziness). I should add that my lack of knowledge of Ruby meant that I didn't really understand how you were implementing the laziness, but I got the intent.

The other part is the actual collision detection. It is a very simplistic algorithm running in O(n^2). Because collision detection is so important in things like games, there are lots of efficient algorithms for it, including some that are still rather simple and can be implemented easily (especially when you can make some assumptions about the input, as you can with WarLight maps). However, it is not entirely clear whether a more efficient solution here would actually improve your tool (especially since n < 3500). With the change suggested above, though, where you make the determining of the bounding boxes static instead of dynamic, it would be possible to have a progress bar for each of the two phases, which will make it easy to test which of the two is your bottleneck in real-world examples.
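To make that concrete, here is a rough Ruby-ish sketch of the brute-force pairing I mean (I won't vouch for the syntax, and the function names and the [xmin, ymin, xmax, ymax] box representation are my own illustration, not how your code stores things):

# Compare every territory's bounding box against every other one - the O(n^2) part.
def boxes_overlap?(a, b)
  a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3]
end

def candidate_pairs(boxes)
  pairs = []
  boxes.each_index do |i|
    (i + 1...boxes.size).each do |j|
      pairs << [i, j] if boxes_overlap?(boxes[i], boxes[j])
    end
  end
  pairs
end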

I hope you can do something with my comments, but I won't be offended if you don't. I just thought I would share my findings with you.

Generate your connections automatically, too!: 3/31/2012 19:38:59

RvW 
Level 54

Didn't study the details, but my gut instinct says that going from O(n^2) to O(n log n) will very likely be worth it, even for small n.
Remember, even if it's ten times as slow for n = 5, it will still be near-instantaneous, whereas reducing the running time by 50% for n = 100 or n = 1000 will save you hours, which will be very "noticeable". ;)

Generate your connections automatically, too!: 3/31/2012 20:42:11


Matma Rex 
Level 12

@Jasper and @RvW, thanks for the input.

Jasper:

This was just a weekend project and I didn't put much thought into the algorithms.

I agree that lazily computing the bounding boxes is unnecessary; I did it this way partly to make the calling code look nicer ;) and partly as a programming exercise.

Employing some cleverer algorithm to calculate collisions would certainly help, but I have no idea how to do it. This is not really intersection detection in its simplest form; the code allows paths to be within 5px of each other, so to use plain intersection detection, you'd have to first enlarge one shape by 5px in all directions, which is a tough problem all by itself. And whatever you do, I think you still have to test every pair of territories - maybe you could somehow reduce the number of points of each to compare, but I have no idea how you would do that.

However, I think that the real bottleneck here is the way Bézier curves are handled. Proper Bézier polycurve intersection detection is, as far as I was able to find, the kind of thing people still write PhD dissertations about. This being a weekend project, I decided to just convert every Bézier curve segment into up to 10 straight lines (in this bit of code, the comment on L88 means "we calculate up to 10 points in-between, fewer for paths shorter than 50px, 0 for paths shorter than 5px"), and this tenfold increase in the number of points to check hurts.
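If it helps to picture that flattening step, a rough sketch of it looks like this (not the actual code from the tool - the Point struct, the assumption of cubic segments, and the way the number of steps is picked here are all simplified illustrations of the rule above):

Point = Struct.new(:x, :y) do
  def distance_to(other)
    Math.hypot(x - other.x, y - other.y)
  end
end

# Evaluate a cubic Bézier (p0, c1, c2, p1) at parameter t in [0, 1].
def cubic_bezier_at(p0, c1, c2, p1, t)
  u = 1.0 - t
  Point.new(
    u**3 * p0.x + 3 * u**2 * t * c1.x + 3 * u * t**2 * c2.x + t**3 * p1.x,
    u**3 * p0.y + 3 * u**2 * t * c1.y + 3 * u * t**2 * c2.y + t**3 * p1.y
  )
end

def flatten_bezier(p0, c1, c2, p1)
  # Crude stand-in for arc length: straight-line distance between the endpoints.
  length = p0.distance_to(p1)
  # 0 intermediate points below 5px, then roughly one per 5px, capped at 10.
  steps = [[(length / 5.0).floor, 10].min, 0].max
  (1..steps).map { |i| cubic_bezier_at(p0, c1, c2, p1, i.to_f / (steps + 1)) }
end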

Also, last but not least - Ruby, while really neat to write and quite readable even if you don't know it, is slow as balls. And I mean really slow. I think it's the slowest out of all "serious" languages, much slower than all other interpreted ones, too (far behind Python, a little behind PHP and Perl too).

If you have any ideas how to make it better, help would be greatly appreciated :)

Generate your connections automatically, too!: 4/3/2012 16:03:03

[16] Jasper 
Level 52

@RvW
That isn't really the question. What I was talking about was the fact that, because n is known to be low, it is much easier for the parts we are pretending are O(n) (though they are actually something more like O(n*m), where m is the average "complexity" of your territories) to be the actual bottleneck. If that is the case, it doesn't noticeably help to get better performance out of the O(n^2) part.

@Matma Rex:
Now that you mention it, it looks like you are indeed doing (semi-)precise collision detection; I thought it was just bounding box collision detection. When I said the simplest form of collision detection, I was also referring to the bounding box check, where the extra distance is trivial.
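(To spell that out: for an axis-aligned bounding box, "enlarge by 5px" is just pushing every edge outwards, after which "within 5px" becomes an ordinary overlap test. A rough Ruby-ish sketch, with a box assumed to be [xmin, ymin, xmax, ymax]:

def inflate(box, pad = 5)
  [box[0] - pad, box[1] - pad, box[2] + pad, box[3] + pad]
end

The hard part you describe only applies to the actual territory shapes, not to their boxes.)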

I am not sure whether the bottleneck can actually be the precise matching. Considering that so much of the precise matching is prevented by checking the bounding boxes, the problem could just as well be in the checking of the bounding boxes itself. I would be interested in finding out just where the bottleneck is. You'd have to make the change to non-dynamic computation of the bounding boxes to rule out the possibility that it's in there (though I have absolutely no idea how you are computing the bounding boxes, so for all I know it might be crazy to even think that is the bottleneck). I suppose the easiest way to find out whether the precise checking is the bottleneck would be to comment it out, consider something a hit if the bounding boxes collide, and then measure the performance gain.

Finally, the assumption that you would always have to check each bounding box against each other one is false. I once had a very interesting lecture about this. There, they discussed a simple algorithm that gave better performance for collision checking. (Let me mention that I am working off the top of my head here, so I may be remembering some of the details incorrectly.)
Basically, they started off by dividing the area into four equal squares, putting a line through the middle both horizontally and vertically. Now you'd have four collections - one for each square - and each bounding box only belongs to the collections of the squares it is in (with boxes straddling a dividing line ending up in multiple collections). You only need to check for collisions within each collection to find all the collisions. (If you have some knowledge of recursive algorithms, you may already see where this is going.)
From there, it continued by further cutting up the areas, quickly coming to the conclusion that only those areas that contain more than a certain number of boxes actually benefit from being cut up further, so we moved to cutting up areas unevenly.
Next up, we had to make sure this was an efficient algorithm, even though the worst case (with all the objects in a corner) was still plaguing us. As such, we changed from cutting the areas in the middle to cutting them at some smarter point. If I remember correctly, this was also where we started cutting in only one dimension at a time.

That is a very rough sketch of the algorithm, but it delivers a better result than brute-forcing it in O(n^2). I believe I might still have a recorded version of the lecture on my laptop's hard disk (though it is still in my laptop, which (mostly) died). (If the lecture was part of that course, that is; it may just as well have been part of some other course, I am not entirely sure anymore.) I am also not sure whether it was in English or in Dutch. But if you are interested in it, I can have a look and see if I can still get at it.
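To give an idea of the shape of the simple (evenly dividing) version, here is a rough Ruby-ish sketch from memory - not the lecture's algorithm, and the names, the leaf size, the depth cap and the [xmin, ymin, xmax, ymax] box representation are all just assumptions for illustration:

require 'set'

def boxes_overlap?(a, b)
  a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3]
end

# Each box goes into every quadrant it touches; only boxes sharing a quadrant are compared.
def subdivided_pairs(boxes, indices, region, depth = 0, out = Set.new)
  # Stop subdividing once a region holds only a few boxes, or we are deep enough.
  if indices.size <= 8 || depth >= 6
    indices.combination(2) do |i, j|
      out << [i, j].sort if boxes_overlap?(boxes[i], boxes[j])
    end
    return out
  end
  mid_x = (region[0] + region[2]) / 2.0
  mid_y = (region[1] + region[3]) / 2.0
  [[region[0], region[1], mid_x, mid_y], [mid_x, region[1], region[2], mid_y],
   [region[0], mid_y, mid_x, region[3]], [mid_x, mid_y, region[2], region[3]]].each do |quad|
    subdivided_pairs(boxes, indices.select { |i| boxes_overlap?(boxes[i], quad) }, quad, depth + 1, out)
  end
  out
end

# Call as: subdivided_pairs(boxes, (0...boxes.size).to_a, overall_bounding_box)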

Last but not least, I may need to take another good look at this part of the script to be sure, but to me it looks like you are only checking how close two points are. That means that if you are using straight lines and their corners happen to be nowhere near one another, the algorithm will fail to realize the lines are close, even if they run close together where there are no corners. I think this shouldn't be too hard to fix.

Generate your connections automatically, too!: 4/5/2012 12:23:33


Matma Rex 
Level 12

Last but not least, I may need to take another good look at this part of the script to be sure, but to me it looks like you are only checking how close two points are. That means that if you are using straight lines and their corners happen to be nowhere near one another, the algorithm will fail to realize the lines are close, even if they run close together where there are no corners.

Uh. That's a very good point, and true. I have no idea how I could have been so stupid as to not notice this. It's amazing the code actually worked!


I tried benchmarking and profiling the code a little.

First of all, loading the map and computing intermediate points on the Bézier curves is fast. On the Bulgaria map, it takes approx. 5 seconds.


Checking all bounding boxes against each other using the current method took just a few seconds on my Ireland Small map (I just commented out the line checking polygons' points against each other), while a full check on the same map takes a few minutes; on Bulgaria, it took about a minute, while full processing takes a few hours (as I said in the first post here). Profiling revealed that actually a lot of this time (~50%) is spent on the lazy computation - this is kinda surprising to me. Anyway, this is not the bottleneck; worth fixing, but we have other problems ;) While your algorithm seems to make sense, it looks like there are usually not enough polygons for it to speed the code up much.


Profiling also revealed that about 40% of the connection checking time is spent computing the distances between points using the #distance_sq method. Since this method can't be made any faster or simpler, we'll have to, well, compute fewer distances :P

I have found something quite interesting: https://en.wikipedia.org/wiki/Gilbert–Johnson–Keerthi_distance_algorithm and an implementation in D: http://code.google.com/p/gjkd/ This looks like exactly what I need here, but the thing seems quite complicated. Maybe I'll get around to implementing this, someday.

Generate your connections automatically, too!: 4/5/2012 17:28:06

[16] Jasper 
Level 52

Profiling revealed that actually a lot of this time (~50%) is spent on the lazy computation - this is kinda surprising to me.

That is interesting. Could you place a counter in the lazy computation function to make sure it isn't executed more often than the number of territories?
As you say, though, it's not the bottleneck so the priority is low.

While your algorithm seems to make sense, it looks like there are usually not enough polygons for it to speed the code up much.

The key to the algorithm was in the last part (the part that I don't really remember :P). By choosing the point to cut the area at in a "smart" way, one could guarantee that no more than a * lg(n) cuts (where a is a constant) are needed, while also making sure that every polygon would then be compared against at most a constant number of other polygons, meaning that the algorithm runs in O(n * lg(n)), which is significantly better than the O(n^2) we had before. This means you will probably get gains even at low polygon counts. Unfortunately, though, it probably still wouldn't help much, considering this isn't where we are spending most of our time.

Profiling also revealed that about 40% of the connection checking time is spent computing the distances between points using the #distance_sq method. Since this method can't be made any faster or simpler, we'll have to, well, compute fewer distances :P

First off, I would suggest actually calculating the distance between the line segments.

Defining distLP(line, point) as the distance between the line and the point and distPP(point, point) as the distance between two points, we can define distLsP for the distance between a line segment and a point as below:

distLsP(lineSegment, point)
{
    linedist = distLP(lineSegment, point)
    endpoint1dist = distPP(lineSegment.endpoint1, point)
    endpoint2dist = distPP(lineSegment.endpoint2, point)
    linelength = distPP(lineSegment.endpoint1, lineSegment.endpoint2)

    if (endpoint1dist ^ 2 > linelength ^ 2 + linedist ^ 2 ||
         endpoint2dist ^ 2 > linelength ^ 2 + linedist ^ 2)
    {
        return min(endpoint1dist, endpoint2dist);
    }
    else
    {
        return linedist;
    }
}


Using this function, one can define the distance between two line segments:

distLsLs(lineSegment1, lineSegment2)
{
    // Note: if the two segments actually cross, the real distance is 0;
    // an extra intersection test would be needed to catch that case.
    return min(distLsP(lineSegment1, lineSegment2.endpoint1),
               distLsP(lineSegment1, lineSegment2.endpoint2),
               distLsP(lineSegment2, lineSegment1.endpoint1),
               distLsP(lineSegment2, lineSegment1.endpoint2))
}

Unfortunately, we are adding distance calculations (linelength and linedist) instead of removing them. (Note that this is where lazy computation actually pays off: at the moment we use each point-to-point distance multiple times, and in a moment we will be removing a number of these checks, so we won't need all of the point-to-point distances anymore.)

However, while this was possible before, it now becomes a lot more interesting to remove some of these checks. We'll just use more bounding boxes. Currently, we are only using bounding boxes at the level of territories; we can also use them at the level of line segments. While this wouldn't make much sense normally, it does here because we are allowing a certain distance between the shapes. I think we should really be able to make interesting gains here, even though we are going for more involved matching.
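Roughly like this (again a Ruby-ish sketch I won't vouch for; the 5px tolerance, the names and the [xmin, ymin, xmax, ymax] representation are assumptions):

TOLERANCE = 5.0

# Bounding box of a single line segment, grown by the allowed distance.
def segment_box(a, b, pad = TOLERANCE)
  [[a[0], b[0]].min - pad, [a[1], b[1]].min - pad,
   [a[0], b[0]].max + pad, [a[1], b[1]].max + pad]
end

def boxes_touch?(p, q)
  p[0] <= q[2] && q[0] <= p[2] && p[1] <= q[3] && q[1] <= p[3]
end

# Growing one of the two boxes by the full tolerance is enough; the expensive
# distLsLs check only needs to run for pairs whose boxes touch.
def worth_checking?(a1, b1, a2, b2)
  boxes_touch?(segment_box(a1, b1), segment_box(a2, b2, 0))
end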

I have found something quite interesting: https://en.wikipedia.org/wiki/Gilbert–Johnson–Keerthi_distance_algorithm and an implementation in D: http://code.google.com/p/gjkd/ This looks like exactly what I need here, but the thing seems quite complicated. Maybe I'll get around to implementing this, someday.

That does look quite interesting and might indeed be a much more efficient way to go about this. At the same time, it also looks quite complicated...

Generate your connections automatically, too!: 4/5/2012 20:21:57

RvW 
Level 54

Jasper, the algorithm you describe sounds very familiar; do you remember what it is called? That will make it easy to Google all the other details and, very likely, a sample implementation.

You forgot to include the code for distLP() (probably easy to Google, but hey, "a good programmer is lazy" ;) ).

Generate your connections automatically, too!: 4/16/2012 11:43:48

[16] Jasper 
Level 52

Sorry for taking so long to respond!

RvW: As for the distLP() issue, I didn't forget it; I just figured it was a trivial piece of mathematics, so I didn't write it down. In other words, I was being a good (= lazy) programmer :P
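(For completeness, it boils down to one formula; a quick Ruby-ish sketch, assuming points are plain [x, y] pairs, which is just an assumption for illustration:

# Distance from point p to the infinite line through a and b: |cross(b - a, p - a)| / |b - a|
def distLP(a, b, p)
  length = Math.hypot(b[0] - a[0], b[1] - a[1])
  return Math.hypot(p[0] - a[0], p[1] - a[1]) if length == 0
  ((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])).abs / length
end

Here the "line" is given by its two endpoints a and b, rather than as a lineSegment object as in the pseudocode.)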

As for the name of the algorithm, I didn't remember it, but I looked it up. Instead of accessing my desktop's hard disk, though, I took the easier path of looking up the course website and seeing if they still have the year I took it archived - and as it happens, they do.

I didn't remember things quite correctly: it wasn't so much about collision detection as about ray tracing, which means calculating the (first) intersection point of a line with any of the objects. As such, the strategy was somewhat different, but in the form I wrote it down, it is quite doable to use the algorithm for collision detection of a large number of objects.

Dividing the space evenly each time gives you an "Octree" in 3D or a "Quadtree" in 2D. Then there is the uneven division of the space, which gives you a "Binary Space Partitioning Tree", or BSP Tree.

This is the course website: http://www.cs.uu.nl/docs/vakken/gr/2008/gr_lectures.html
It contains slides of the lectures about the topic, as well as videos of the teacher talking over his slides and the annotations he makes live during the video. You'd be looking for Lecture 7, part 2. If you watch the videos, I suggest speeding them up by at least 33%.

Generate your connections automatically, too!: 2/14/2013 22:42:22


Stag 
Level 57
It appears that the executable file is no longer available. Could someone please repost it?