a) preparer runs in 50% of the time
b) LocationHook is also much faster when the new bnd files are used, but mkgmap still works with the existing *.bnd files
c) bnd file size is a bit larger (~20%), but the zip-compressed size is much smaller (I don't know why)
I hope I have found all the big errors, but there is still some work to do:
1) some error messages are only meaningful to the programmer, but I kept them for now
2) some tools do not yet work with the new bnd files, e.g. BoundaryFile2Gpx reports wrong coverage. It is probably possible to adapt them, but I have not tried yet.
3) The workoutBoundaryRelations() method is a good candidate for parallel execution; each thread could process one bnd file.
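The per-file parallelism suggested in 3) could be sketched like this. This is not mkgmap code: ParallelBndSketch, processBndFile, and the thread count are illustrative names standing in for workoutBoundaryRelations() applied to a single bnd file.

```java
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelBndSketch {
    // Hypothetical per-file work; stands in for processing one bnd file.
    static String processBndFile(Path bndFile) {
        return bndFile.getFileName().toString();
    }

    // Submit one task per bnd file and collect the results in input order.
    static List<String> processAll(List<Path> bndFiles, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (Path p : bndFiles)
                futures.add(pool.submit(() -> processBndFile(p)));
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures)
                results.add(f.get()); // waits for each task to finish
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Path> files = Arrays.asList(
                Paths.get("bounds_3100000_1650000.bnd"),
                Paths.get("bounds_3100000_1700000.bnd"));
        System.out.println(processAll(files, 2));
    }
}
```

This works because each bnd file can be processed independently; only the result collection needs coordination.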
A few remarks:
1) As mentioned before, I kept the bnd format, but used a trick to save the position in the quadtree.
The boundaryid is padded with the "treepath" and a number which is incremented for each boundary in a given quadtree node. For example, boundaryid=r1202073_332_3 means the boundary data is OSM relation 1202073, and it should be added to "root->child->child->child.nodes".
2) The tag mkgmap:lies_in was replaced by mkgmap:intersects_with; the format is the same
3) A special case: boundaries that are referenced in mkgmap:intersects_with tags but do not appear with
a treepath. We need the tags of those boundaries to be able to find the right ISO code. These boundaries are added to the bnd file, but not with the original area info, just with the bbox of the original area. Without that trick, the bnd files would be much larger.
4) I kept a lot of code that allows creating gpx files for boundary areas that are somehow strange, e.g. extremely small triangles, long and thin spikes, etc. Probably most of that code will be removed later.
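Based on remark 1), splitting a padded boundaryid back into its parts could look like this sketch. BoundaryIdSketch and parse are hypothetical names; the format (OSM id, treepath, counter, separated by underscores) is assumed from the r1202073_332_3 example.

```java
public class BoundaryIdSketch {
    // Split a padded boundaryid like "r1202073_332_3" into its parts:
    // the original OSM id, the quadtree "treepath" (one digit per level),
    // and the counter within the quadtree node.
    static String[] parse(String boundaryId) {
        int last = boundaryId.lastIndexOf('_');
        int prev = boundaryId.lastIndexOf('_', last - 1);
        return new String[] {
                boundaryId.substring(0, prev),        // OSM id, e.g. r1202073
                boundaryId.substring(prev + 1, last), // treepath, e.g. 332
                boundaryId.substring(last + 1)        // counter, e.g. 3
        };
    }

    public static void main(String[] args) {
        String[] parts = parse("r1202073_332_3");
        System.out.println(parts[0] + " / " + parts[1] + " / " + parts[2]);
    }
}
```

Splitting from the right keeps any underscores in the original id (if they can occur) out of the treepath and counter.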
Is it also possible to change the file organization?
The world bounds contain 17,152 single files.
I think this is very unhandy.
I don't care about the number of files, but it should be no problem to
change the program so that it groups
the files into subdirectories, e.g. bounds_3100000_1650000.bnd
can be moved to bounds_3100000/1650000.bnd
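The suggested grouping is a simple path mapping, sketched here. BndPathSketch and grouped are illustrative names, not part of mkgmap; the split point is assumed from the example name.

```java
import java.nio.file.*;

public class BndPathSketch {
    // Map a flat bounds file name like "bounds_3100000_1650000.bnd"
    // to a grouped path "bounds_3100000/1650000.bnd".
    static Path grouped(String flatName) {
        // Split at the second underscore: "bounds_3100000" / "1650000.bnd".
        // Negative coordinates use '-', not '_', so the split is unaffected.
        int first = flatName.indexOf('_');
        int second = flatName.indexOf('_', first + 1);
        return Paths.get(flatName.substring(0, second),
                         flatName.substring(second + 1));
    }

    public static void main(String[] args) {
        System.out.println(grouped("bounds_3100000_1650000.bnd"));
    }
}
```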
Yes, probably this will be an improvement (maybe in performance, but
definitely in handling). But this is (a bit) more complicated to implement.
So, after the other problems are solved, hopefully someone will invest
some time to add the handling for one zip file.
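Reading the bounds from a single zip file could start with the standard java.util.zip API, roughly like this sketch. ZipBoundsSketch and listBndEntries are hypothetical names; a real reader would open each entry's stream instead of a file in the bounds directory.

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ZipBoundsSketch {
    // List the *.bnd entries of a single bounds zip file.
    static List<String> listBndEntries(File zip) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipFile zf = new ZipFile(zip)) {
            Enumeration<? extends ZipEntry> en = zf.entries();
            while (en.hasMoreElements()) {
                ZipEntry e = en.nextElement();
                if (e.getName().endsWith(".bnd"))
                    names.add(e.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(listBndEntries(new File(args[0])));
    }
}
```

ZipFile gives random access to entries, so the LocationHook could keep looking up individual tiles by name without unpacking the whole archive.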