Generalisation


Generalisation

Tomas Straupis
Hello

  In last weeklyOSM (http://www.weeklyosm.eu/archives/10227) there was
a very interesting point:

  "Arne Johannessen has published his diploma thesis (PDF) with the
title "Algorithms for automated generalization by combining polylines
in OpenStreetMap for specific special cases" and he also released the
corresponding Java source code."

  As OSM is mature enough to start generalisation (more than
"selection" operator), maybe there is some place where such topics (in
OSM context) are discussed in English?

  Also, maybe there are ideas to translate the thesis mentioned above
to English?

--
Tomas

_______________________________________________
dev mailing list
[hidden email]
https://lists.openstreetmap.org/listinfo/dev

Re: Generalisation

Frederik Ramm
Hi,

On 04/14/2018 11:18 AM, Tomas Straupis wrote:
>   As OSM is mature enough to start generalisation (more than
> "selection" operator), maybe there is some place where such topics (in
> OSM context) are discussed in English?

The most likely location for this to be discussed is probably within the
openstreetmap-carto developer community as they would benefit most from
such approaches. I don't follow their work closely though so couldn't
say if the issues have been discussed in the past.

I'm sure Arne himself would be happy to participate but I hear he's gone
on holiday after completing his thesis ;)

>   Also, maybe there are ideas to translate the thesis mentioned above
> to English?

I'm not aware of any plans, but I do know that Arne has quoted similar
work done by others, and there were many English-language works among
that, so perhaps if you skim through his literature list at the end of
the PDF you'll find interesting articles.

Best
Frederik

--
Frederik Ramm  ##  eMail [hidden email]  ##  N49°00'09" E008°23'33"


Re: Generalisation

Tomas Straupis
Hello

2018-04-15 22:54 GMT+03:00 Frederik Ramm wrote:
> The most likely location for this to be discussed is probably within the
> openstreetmap-carto developer community as they would benefit most from
> such approaches. I don't follow their work closely though so couldn't
> say if the issues have been discussed in the past.

  Good point. I'll watch openstreetmap-carto.

>>   Also, maybe there are ideas to translate the thesis mentioned above
>> to English?
> I'm not aware of any plans, but I do know that Arne has quoted similar
> work done by others, and there were many English-language works among
> that, so perhaps if you skim through his literature list at the end of
> the PDF you'll find interesting articles.

  I'm already buried in PDFs from Google Scholar :-) You start with
one paper, then open its references, then references of references, and
there you have the "Wikipedia effect".

  The point is that Arne's work looks (from what I understood) at
generalising OpenStreetMap data specifically, while most papers look at
generic data. This is important because OSM tagging helps a lot: some
generalisation can be done much more easily with OSM data because of
the additional tags, and you can add further tags describing real
properties of objects (not designed solely for generalisation) which
could help generalisation. For example, with road generalisation a lot
has been written on intersection generalisation, and having *_link ways
or ref tags in OSM helps this effort a lot. The railway network also
has subtags like main/siding/spur etc. which help as well. So while
generalisation is generalisation anywhere, generalisation of OSM data
does have some important and useful differences/advantages.

  Another interesting thing is recalculating/updating generalised
data. The "OSM way" is to update very often, so it is important to be
able to recalculate generalised data only for the impacted parts, and
that is not as trivial as calculating "dirty tiles".

  I do not know how much of that is covered in Arne's thesis. I will
try to read/translate the German version if there are no plans for a
translation.

  Thank you!

--
Tomas


Re: Generalisation

Daniel Koć
In reply to this post by Frederik Ramm
On 15.04.2018 at 21:54, Frederik Ramm wrote:
> On 04/14/2018 11:18 AM, Tomas Straupis wrote:
>>   As OSM is mature enough to start generalisation (more than
>> "selection" operator), maybe there is some place where such topics (in
>> OSM context) are discussed in English?
> The most likely location for this to be discussed is probably within the
> openstreetmap-carto developer community as they would benefit most from
> such approaches. I don't follow their work closely though so couldn't
> say if the issues have been discussed in the past.

I remember that we in the osm-carto community have mentioned
generalization from time to time, but the discussion is loosely
scattered here and there and we've never focused on it. I heard quite
interesting things about this technique in Paul Norman's talk from last
year (the exact section is here: https://youtu.be/g2HYYADa8XI?t=1h36s ).

I think it's best to discuss this topic on the forum, because it's much
broader than any single ticket, and it might be interesting for other
rendering styles too:

https://forum.openstreetmap.org/viewforum.php?id=100

--
"My method is uncertain/ It's a mess but it's working" [F. Apple]




Re: Generalisation

dieterdreist
AFAIK, in the osm-carto style there is no generalization on live data for performance reasons (because of continuous data updates via minutely diffs). There are some precomputed / extracted data files though, some of which contain generalized (simplified) data. These are all "external" sources:

  1. world_bnd_m.shp, places.shp, world_boundaries_m.shp
  2. simplified_land_polygons.shp (updated daily)
  3. ne_110m_admin_0_boundary_lines_land.shp
  4. land_polygons.shp (updated daily)
  5. icesheet_polygons.shp
  6. icesheet_outlines.shp


Still AFAIK, 1 and 3 currently come from Natural Earth data and the rest is from OSM; 2 is a simplified version of 4, and both come from openstreetmapdata.org (Christoph and Jochen). Of these, 1-3 contain generalized data.

Cheers,
Martin



Re: Generalisation

dieterdreist


2018-04-16 10:34 GMT+02:00 Martin Koppenhoefer <[hidden email]>:
still AFAIK, 1 and 3 are currently still from natural earth data, the rest is from OSM, 2 is a simplfied version of 4, both come from openstreetmapdata.org (Christoph and Jochen). Of these, 1-3 are containing generalized data.


According to Paul Norman in the video recording, 5 and 6 are also generalized.
Another aspect is filtering: osm-carto removes features when they would be very small (a few pixels at a given zoom level) and would lead to "noise".
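The arithmetic behind such a pixel-size cutoff can be sketched as follows; the function names and the 4-pixel threshold are illustrative assumptions (osm-carto's actual way_area thresholds are defined per feature and in Mercator units), assuming standard Web-Mercator 256-pixel tiles:

```python
import math

def metres_per_pixel(zoom, lat_deg=0.0, tile_size=256):
    # Web-Mercator ground resolution at a given latitude and zoom level.
    earth_circumference = 40075016.686  # metres at the equator
    return earth_circumference * math.cos(math.radians(lat_deg)) / (tile_size * 2 ** zoom)

def min_visible_area(zoom, min_pixels=4, lat_deg=0.0):
    # Area in m^2 below which a polygon covers fewer than `min_pixels`
    # pixels at this zoom and could be filtered out as visual noise.
    res = metres_per_pixel(zoom, lat_deg)
    return min_pixels * res * res
```

At z0 this gives the familiar ~156543 m/pixel at the equator; each extra zoom level quarters the minimum visible area.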

Cheers,
Martin


Re: Generalisation

Tomas Straupis
In reply to this post by dieterdreist
2018-04-16 11:34 GMT+03:00 Martin Koppenhoefer wrote:
> There are some precomputed / extracted data files though, some of which
> contain generalized (simplified) data. These are all "external" sources:
> <...>

  Ok, so this is natural polygon generalisation. Looking at
https://github.com/imagico/coastline_gen the method used is to
rasterise, process and then vectorise back.
  I wonder whether that is better/faster than the full vector way:
st_clusterwithin, st_union, st_buffer(positiveN),
st_buffer(negativeN+M), st_buffer(positiveM), with a seasoning of
st_simplifypreservetopology according to taste.
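A toy sketch of why the raster route can work: on a boolean grid, the buffer(+N)/buffer(-N) pair becomes a morphological dilate/erode ("closing"), which bridges gaps narrower than the radius. This only illustrates the general idea and is not coastline_gen's actual implementation:

```python
def dilate(grid, r=1):
    # Grow each True cell by r cells in every direction.
    h, w = len(grid), len(grid[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = True
    return out

def erode(grid, r=1):
    # Keep a cell only if its whole (2r+1)^2 neighbourhood is True.
    # Out-of-bounds counts as empty, so the grid border erodes too.
    h, w = len(grid), len(grid[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = all(
                0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            )
    return out

def closing(grid, r=1):
    # Dilate then erode: the raster analogue of buffer(+N) then buffer(-N).
    return erode(dilate(grid, r), r)
```

Two blobs separated by a one-cell gap come out of `closing` as a single connected blob, exactly like amalgamating nearby forest patches.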

> Another aspect is filtering: osm-carto removes features when they would be
> very small (pixels at a given zoom level) and lead to "noise".

  Filtering (selection) is technically also a generalisation.
  But you need to group and probably amalgamate features before
deciding that something is "too small". For example, if we have a lot
of small patches of forest close together (say 1000 patches of 10x10m
with a distance of 1m between patches) you would want to amalgamate
them into one large forest, not get rid of them all.

  Ways and especially buildings are the most interesting (difficult) part :-)

P.S. GRASS claims to be doing displacement and way selection
(https://grasswiki.osgeo.org/wiki/V.generalize_tutorial)

--
Tomas


Re: Generalisation

Christoph Hormann-2
In reply to this post by dieterdreist
On Monday 16 April 2018, Martin Koppenhoefer wrote:
> AFAIK, in the osm-carto style there is no generalization on live data
> for performance reasons (because of continuous data updates via
> minutely diffs). There are some precomputed / extracted data files
> though, some of which contain generalized (simplified) data. These
> are all "external" sources:

I think this requires some clarification.

There are different definitions of what kind of operations and processes
you call "generalization".  But typically selection is considered a
generalization operation meaning that most of what is shown at all but
the highest zoom levels is shown with some level of generalization.  
Most selection is rather primitive - based on fixed zoom level
thresholds, the dreaded way_area filtering etc.  There are very limited
attempts at some additional sophistication, like for populated place
rendering at low to mid zoom levels.

When talking about explicit geometric generalization, while specifically
excluding lossy vector data compression (which is widely
mischaracterized as generalization), the Natural Earth boundaries at
z=1-3 are currently the only case.

Lossy vector data compression in addition is currently/historically used
in the following places:

* Coastlines at z8-9 - the visual impact of this was rather small but
visible in some areas (in particular with small islands) if you kept an
eye on it at the transit from z9 to z10.  This has been the case for a
long time but is essentially superseded by:
* Coastlines at z0-9 since
https://github.com/gravitystorm/openstreetmap-carto/pull/3065
* The administrative boundaries at z4+ since
https://github.com/gravitystorm/openstreetmap-carto/pull/3103

You need to keep in mind that both geometric generalization and lossy
vector data compression operations are largely incompatible with the
current goals of OSM-Carto:

https://github.com/gravitystorm/openstreetmap-carto/blob/master/CARTOGRAPHY.md

--
Christoph Hormann
http://www.imagico.de/


Re: Generalisation

Marco Boeringa
In reply to this post by Tomas Straupis

No, buildings are not the most interesting. I once generalized all buildings in Denmark. It only reduced the storage by maybe 5%, at the high cost of heavily distorting a large number of them. Most buildings in OSM are in fact already in their most generalized state: just 4 nodes. Unless you think triangles are a suitable representation ;-). Besides, buildings are only shown at high zoom, while generalization is most needed and beneficial at low zoom. Lastly, most vector generalization algorithms are primarily designed for, and effective on, rather smooth and node-rich data, like a stream-digitized feature, neither of which describes square buildings. Hence I consider generalizing buildings largely senseless.

From: Tomas Straupis
Sent: Monday, 16 April 11:48
Subject: Re: [OSM-dev] Generalisation
To: Openstreetmap Dev list





Re: Generalisation

Tomas Straupis
2018-04-16 19:34 GMT+03:00 Marco Boeringa wrote:
> No, buildings are not the most interesting. I once generalized all buildings
> in Denmark. It only reduced the storage by maybe 5%, at the high cost of
> heavily distorting a large number of them. Most buildings in OSM are in fact
> already in their most generalized state: just 4 nodes. Unless you think
> triangles is a suitable representation ;-)

  Interesting, what algorithm did you use?

  I'm playing around in Vilnius which has urban houses, big block
houses, industrial zones and old town with lots of connected buildings
of very irregular shapes.
  In Vilnius there are 54267 buildings tagged with 366979 vertices.
  Clustering them with a distance of 5m gives 45810 objects (of course
with the same number of vertices).
  Removing buildings with area < 100 that have neighbours within 500
metres, I'm left with 28974 buildings with 299224 vertices.
  Simplification (amalgamating buildings within a cluster and trying to
remove edges < 20m) reduces the number of vertices to 117108.
  So this is much more than 5%.
  There are still a lot of problems (no triangles :-)), but I do not
expect the number of vertices to rise considerably.

  Even "dumb" generalisation (st_buffer+- with join=mitter) reduces
vertex count by ~25%.

  Reducing storage/tile size is not the only/main purpose of generalisation.
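The "remove edges < 20m" step above can be sketched as follows; the midpoint-snapping rule here is a deliberate simplification (a real building simplifier would preserve right angles and shared walls):

```python
import math

def collapse_short_edges(ring, tol):
    """Collapse polygon edges shorter than `tol` to their midpoint.

    `ring` is a list of (x, y) vertices without the repeated closing
    vertex.  Naive sketch: it stops at 4 vertices so a building can
    never degenerate into a triangle.
    """
    pts = list(ring)
    changed = True
    while changed and len(pts) > 4:
        changed = False
        for i in range(len(pts)):
            a, b = pts[i], pts[(i + 1) % len(pts)]
            if math.hypot(b[0] - a[0], b[1] - a[1]) < tol:
                mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                if (i + 1) % len(pts) == 0:
                    # The short edge wraps from the last vertex to the first.
                    pts = [mid] + pts[1:i]
                else:
                    pts = pts[:i] + [mid] + pts[i + 2:]
                changed = True
                break
    return pts
```

On a 20x20m outline with a 1m jog in one wall, the jog collapses to a single vertex and the ring drops from 6 vertices to 5.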

> Besides, buildings are only shown at high zoom,
> while generalization is most needed and beneficial at
> low zoom. Lastly, most vector generalization algorithms are primarily
> designed and effective for rather smooth and node rich data, like a
> stream-digitized feature, neither of which relates to square buildings.
> Hence  i consider generalizing buildings largely senseless.

  I suspect you're talking about the use of
st_simplify(preservetopology) in vector tile generators, which, as
mentioned earlier, is only technically a "generalisation":
  http://www.gitta.info/Generalisati/en/html/unit_GenProcedure.html

--
Tomas


Re: Generalisation

Daniel Koć
In reply to this post by Christoph Hormann-2
On 16.04.2018 at 11:42, Christoph Hormann wrote:

> There are different definitions of what kind of operations and processes
> you call "generalization".  

Thanks for showing examples of what can be seen as generalization; this
is quite a wide subject:

https://en.wikipedia.org/wiki/Cartographic_generalization

In osm-carto we also use type generalization (like meadow and grassland
shown in the same way), which is very different from the one in the
original message.

> You need to keep in mind that both geometric generalization and lossy
> vector data compression operations are largely incompatible with the
> current goals of OSM-Carto:
>
> https://github.com/gravitystorm/openstreetmap-carto/blob/master/CARTOGRAPHY.md

I don't think this is true. For example, making borders simpler increases
clarity, while the previous state created optical illusions.

--
"My method is uncertain/ It's a mess but it's working" [F. Apple]




Re: Generalisation

dieterdreist
In reply to this post by Marco Boeringa


2018-04-16 18:34 GMT+02:00 Marco Boeringa <[hidden email]>:

No, buildings are not the most interesting. I once generalized all buildings in Denmark. It only reduced the storage by maybe 5%, at the high cost of heavily distorting a large number of them. Most buildings in OSM are in fact already in their most generalized state: just 4 nodes. Unless you think triangles is a suitable representation ;-).


It really depends on the zoom level (= the detail you want) and the building structure. In a closed building block there may be a lot of those 4-node houses which together could be generalized to one 4-node block. With scattered houses in a rural setting, you can still omit the smaller ones or make one bigger structure by combining several smaller ones. Many buildings also have more than 4 nodes.

Cheers,
Martin


Re: Generalisation

Yves
In reply to this post by Daniel Koć



>> You need to keep in mind that both geometric generalization and lossy
>> vector data compression operations are largely incompatible with the
>> current goals of OSM-Carto:
>>
>> https://github.com/gravitystorm/openstreetmap-carto/blob/master/CARTOGRAPHY.md
>
> I don't think this is true. For example making borders simpler increases
> clarity, while previous state was creating optical illusions.
Also, generalizing buildings could for instance give something nice at low zoom instead of only landuse=residential. Nothing here really contradicts the described goals.


Yves


Re: Generalisation

Tomas Straupis
SwissTopo maps are some of the best examples of generalisation (as
well as of other cartographic principles/techniques):
https://map.geo.admin.ch/


Re: Generalisation

Marco Boeringa
In reply to this post by Tomas Straupis
Hi Tomas,

The generalization I wrote about was just a crude basic generalization
of vector (building) data from OSM using a standard tool of ESRI's
ArcGIS. The specific tool used (Simplify Polygon) has more advanced
settings than standard Douglas-Peucker, but by itself does nothing
really special other than weeding out vertices / nodes. I just tried it
with different tolerances to see what the results would be, and
concluded that the resulting defects in building topology were not worth
the reduction in file size.
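For reference, standard Douglas-Peucker recursively keeps the vertex farthest from each chord, which is exactly why it mangles rectilinear buildings: on a 4-corner outline all it can do is delete corners. A minimal sketch of the classic algorithm (an illustration, not ESRI's Simplify Polygon):

```python
import math

def _perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    # Keep the point farthest from the chord; recurse on both halves.
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:
        # Everything is within tolerance of the chord: drop the interior.
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right
```

On a wiggly but nearly straight line it collapses everything to the two endpoints, which is exactly the behaviour you want on stream-digitized data and exactly what you do not want on a square building.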

Of course, if your city's buildings are far more detailed than the
average building in OSM, e.g. an import of official government data
measured down to centimetre level using land surveying techniques, with
rather large vertex counts on average, I can imagine that even simple
generalization techniques may reduce vertex counts more than I achieved.
It also depends a lot on how many artefacts you personally tolerate...
(my tolerance is low for buildings).

However, as to interesting stuff to read: the national Kadaster of the
Netherlands is actually the very first and only national mapping
organization worldwide that has successfully managed to implement a
fully automated generalization workflow for generating 1:50k maps from
1:10k maps, including landuses, waterways and highways, but also
generalizing built-up areas and buildings. They used a range of
cartographic generalization tools from ArcGIS (which I didn't use...).

The results achieved by the Dutch Kadaster closely mimic what Imagico
describes as a more holistic approach to generalization, and are largely
true to how cartographers traditionally generalized maps manually. In
fact, one of the key aspects of the workflow developed by the Dutch
Kadaster was to mimic as closely as possible the inherent "rules" their
cartographers used and developed over decades or more than a century to
"generalize" maps to smaller scales.

However, if you read about the level of effort needed to achieve this
(years of development by a small team of employees / researchers, and a
huge tool chain built up), and the sheer processing power needed to do
such a sophisticated generalization, it is utterly clear you cannot do
this in real time. It is only worth the effort in organizations like
national mapping agencies, where the ultimate gain of automation - fully
replacing the manual conversion of topographic maps from one scale to
another, or maintaining different workflows for different scale map
series (1:10k, 1:25k, 1:50k, 1:100k, 1:250k etc.) - far outweighs the
effort to develop such a generalization tool chain and workflow in the
long run.

They now call this workflow "AUTOgen".

The Dutch Kadaster was actually awarded a prize by ESRI for this work.
See also this ArcNews bulletin (pages 19-21):
https://www.esri.com/~/media/Files/Pdfs/news/arcnews/winter1314/winter-2013-2014.pdf

Some links to this work by the Dutch Kadaster:
-
https://repository.tudelft.nl/islandora/object/uuid:12f0c152-958a-4688-a56d-5e30f7540a68/datastream/OBJ
-
https://www.kadaster.com/documents/33433/36597/An+overview+of+the+Dutch+approach+to+automatic+generalisation/6af54a07-3188-41db-81d2-fdc2e1d4094b
-
https://www.kadaster.com/documents/33433/36597/Feasibility+study+on+an+automated+generalisation/dbf664a7-160f-456d-9559-32263ef6793f
-
https://www.researchgate.net/publication/299455974_Automated_generalisation_in_production_at_Kadaster_NL

Links to English pages of Dutch Kadaster;
https://www.kadaster.com/international-consultancy
https://www.kadaster.com/automatic-generalisation

Other interesting information regarding buildings (LODs) from research
involving one of the people also involved in the Kadaster work:
https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XXXVIII-4-C26/7/2012/isprsarchives-XXXVIII-4-C26-7-2012.pdf

(Note: I wasn't involved in any of this by the way, just know of this work)

Marco

On 16-4-2018 at 19:23, Tomas Straupis wrote:

> 2018-04-16 19:34 GMT+03:00 Marco Boeringa wrote:
>> No, buildings are not the most interesting. I once generalized all buildings
>> in Denmark. It only reduced the storage by maybe 5%, at the high cost of
>> heavily distorting a large number of them. Most buildings in OSM are in fact
>> already in their most generalized state: just 4 nodes. Unless you think
>> triangles is a suitable representation ;-)
>    Interesting, what algorithm did you use?
>
>    I'm playing around in Vilnius which has urban houses, big block
> houses, industrial zones and old town with lots of connected buildings
> of very irregular shapes.
>    In Vilnius there are 54267 buildings tagged with 366979 vertexes.
>    Clustering them with distance of 5m gets 45810 objects (of course
> with the same number of vertexes).
>    Removing buildings with area < 100 and having neighbours in < 500
> meters I'm left with 28974 buildings with 299224 vertexes.
>    Simplification (amalgamating buildings in the cluster and trying to
> remove edges < 20m) reduces the number of vertexes to 117108.
>    So this is much more than 5%.
>    There are still a lot of problems (no triangles:), but I do not
> expect number of vertexes to rise considerably.
>
>    Even "dumb" generalisation (st_buffer+- with join=mitter) reduces
> vertex count by ~25%.
>
>    Reducing storage/tile size is not the only/main purpose of generalisation.
>



Re: Generalisation

Christoph Hormann-2
On Wednesday 02 May 2018, Marco Boeringa wrote:
> [...]
>
> However, as to interesting stuff to read: our national Kadaster of
> the Netherlands, is actually the very first and only national mapping
> organization world wide, that has successfully managed to implement a
> fully automated generalization work flow for generating 1:50k maps
> from 1:10k maps, [...]

That is largely marketing hyperbole.  The fact that no one tends to
define the term "fully automated" and "successfully" in such context
should serve as a warning.

Institutional mapping in various countries has used algorithmic geometry
processing in production of maps for quite some time but most of them
(including the ones in the Netherlands) tend to still maintain a
traditional view of the cartographic process.  They are for example
speaking of a two year update interval which would be quite curious if
the processes were indeed fully automated according to the common
understanding of this term.

And also it is ultimately: Demo or it didn't happen - at the moment the
only thing you can get at 1:50k is the old style map:

https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster

and the processed geometry data set (without any labeling information):

https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topnl/topnl-actueel/top50nl

Without a styled map rendering this is not really something you can
seriously evaluate (although you can see quite a few cases of geometric
incompatibilities in the geometries).

--
Christoph Hormann
http://www.imagico.de/


Re: Generalisation

Tomas Straupis
In reply to this post by Marco Boeringa
Hello

2018-05-02 19:33 GMT+03:00 Marco Boeringa wrote:
> The generalization I wrote about was just a crude basic generalization of
> vector (building) data of OSM using a default tool of ESRI's ArcGIS. The
> specific tool used (Simplify Polygon), has more advanced settings than
> standard Douglas Peucker, but by itself does nothing really special other
> than weeding out vertices / nodes. I just attempted to use it with different
> tolerances to see what the results would be, and concluded the resulting
> defects in building topology, were not worth the reduction in file size.

  You've probably used this:
  http://desktop.arcgis.com/en/arcmap/10.3/tools/cartography-toolbox/simplify-polygon.htm

  And I'm talking about this:
  http://desktop.arcgis.com/en/arcmap/10.3/tools/coverage-toolbox/simplify-building.htm

  But both of these are closed and tied to their proprietary
architecture. And there is even less information on their implementation
than in the first(?) scientific paper about actual building
generalisation algorithms: Sester, M. (2000): Generalization Based on
Least Squares Adjustment. In: ISPRS (ed.).

> However, as to interesting stuff to read: our national Kadaster of the
> Netherlands, is actually the very first and only national mapping
> organization world wide, that has successfully managed to implement a fully
> automated generalization work flow for generating 1:50k maps from 1:10k
> maps, including landuses, waterways, highways, but also generalizing
> build-up areas and buildings. They used a range of cartographic
> generalization tools from ArcGIS (that I didn't use...).

  Congratulations to the national Kadaster, but I'm not sure you're
correct about "first and only". Our local (Lithuanian) land agency (or,
to be more specific, gis-centras) completed automated generalisation
1-2 years ago (using ESRI tools as well). As far as I know it is fully
automated and done in about a day.

  Most GIS people use a work by Sandro Savino, "A solution to the
problem of the generalization of the Italian geographical databases
from large to medium scale: approach definition, process design and
operators implementation". The author claims to have completed automated
generalisation for Italy, and it dates to 2011. This work is very
interesting because, instead of referring to closed commercial tools, it
gives a very detailed description of how to actually do things.

  Also, SwissTopo is known to have been doing very high quality
generalisation for years(?).

  But thank you for your links, it is interesting to learn how
different countries handle generalisation.

--
Tomas


Re: Generalisation

Marco Boeringa
In reply to this post by Christoph Hormann-2
Christoph,

This is a bit like the Vatican saying to Galileo that the earth doesn't
spin around the sun, but the other way around...

Have you even looked at the links I provided? I can assure you (living
in the Netherlands myself, I think I have a better appreciation of this
specific effort) that this is no "marketing hyperbole".

They did fully automate the generalization process from a 1:10k base to
1:25k and 1:50k. The two years update interval is the amount of time
needed to update the base 1:10k TOP10NL vector map.

Even if you fully automate the generalization process to produce medium
and small scale maps from large scale ones, you still need time to
update the base 1:10k large scale map. A two-year update cycle means
they fly each part of the entire country - capturing 1:5k high
resolution stereo aerials - in that period of time. That two-year cycle
is a huge achievement: in most countries the update cycle of topographic
sheets is a minimum of 5-10 years, in some cases 25 years..., meaning it
takes a minimum of 5-10 years before a given published map sheet is
updated to the latest on-the-ground state as captured by new aerials
(which may already be outdated by the time they are actually processed
into map sheets). The two-year cycle in the Netherlands is in fact to a
large extent the result of the automated generalization employed for the
medium and small scale maps, freeing up workforce to maintain the base
1:10k TOP10NL vector map instead of needing to maintain multiple map
series concurrently.

"And also it is ultimately: Demo or it didn't happen - at the moment the
only thing you can get at 1:50k is the old style map:
https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster"

This is a serious misunderstanding. That is a rasterized version of the
new-style 1:50k vector map based on generalization, which just happens
to look very close to the originally manually generalized one: that was
the whole target of the effort. They make the rasters available for
clients, as it is just an easy way to consume the data, styled and all.

"and the processed geometry data set (without any labeling information):
https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topnl/topnl-actueel/top50nl"

And here you are actually pointing out one of the 1:50k vector products
(in GML format) that they make available based on the described new
workflows, so I don't understand your argument?...

Marco

 

Op 2-5-2018 om 21:02 schreef Christoph Hormann:

> On Wednesday 02 May 2018, Marco Boeringa wrote:
>> [...]
>>
>> However, as to interesting stuff to read: our national Kadaster of
>> the Netherlands, is actually the very first and only national mapping
>> organization world wide, that has successfully managed to implement a
>> fully automated generalization work flow for generating 1:50k maps
>> from 1:10k maps, [...]
> That is largely marketing hyperbole.  The fact that no one tends to
> define the term "fully automated" and "successfully" in such context
> should serve as a warning.
>
> Institutional mapping in various countries has used algorithmic geometry
> processing in production of maps for quite some time but most of them
> (including the ones in the Netherlands) tend to still maintain a
> traditional view of the cartographic process.  They are for example
> speaking of a two year update interval which would be quite curious if
> the processes were indeed fully automated according to the common
> understanding of this term.
>
> And also it is ultimately: Demo or it didn't happen - at the moment the
> only thing you can get at 1:50k is the old style map:
>
> https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster
>
> and the processed geometry data set (without any labeling information):
>
> https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topnl/topnl-actueel/top50nl
>
> Without a styled map rendering this is not really something you can
> seriously evaluate (although you can see quite a few cases of geometric
> incompatibilities in the geometries).
>



Re: Generalisation

Marco Boeringa
In reply to this post by Tomas Straupis
Hi Tomas,

You do realize that "1-2 years ago" is well after 2013, when the Dutch
Kadaster started to publish their work?

As to Lithuania, I can't speak for your country, but your Swedish Baltic
brethren actually adopted the Dutch Kadaster's approach, including the
developed models, through a cooperation agreement:

https://kartographie.geo.tu-dresden.de/downloads/ica-gen/symposium2015/Sweden_Abstract_NMA_Workshop_Amsterdam_Dec_2015.pdf

Maybe your Lithuanian cadastre looked over the shoulders of the Swedes?
At the very least, they may have gotten a little inspiration... ;-),
although they may well have developed this entirely on their own using
the same ESRI tools. For sure, the Dutch Kadaster seems to have been
very open about their specific development work in an international
context...

Marco


Op 2-5-2018 om 22:24 schreef Tomas Straupis:

>
>> However, as to interesting stuff to read: our national Kadaster of the
>> Netherlands, is actually the very first and only national mapping
>> organization world wide, that has successfully managed to implement a fully
>> automated generalization work flow for generating 1:50k maps from 1:10k
>> maps, including landuses, waterways, highways, but also generalizing
>> build-up areas and buildings. They used a range of cartographic
>> generalization tools from ArcGIS (that I didn't use...).
>    Congratulations to national Kadaster, but I'm not sure you're
> correct about "first and only". Our local (Lithuanian) land agency (or
> to be more specific gis-centras) has completed automated
> generalisation 1-2 years ago (using esri tools as well). As far as I
> know fully automated and done in ~day.
>
>    Most GIS people use a work by Sandro Savino "A solution to the
> problem of the generalization of the Italian geographical databases
> from large to medium scale: approach definition, process design and
> operators implementation". Author claims to have completed automated
> generalisation for Italy and it dates to 2011. This work is very
> interesting because instead of referring to closed commercial tools it
> has a very detailed description of how to actually do this and that.
>
>    Also Swiss Topo is known to be doing a very high quality
> generalisation for years(?).
>
>    But thank you for your links, it is interesting to learn how
> different countries handle generalisation.
>



Re: Generalisation

Christoph Hormann-2
In reply to this post by Marco Boeringa
On Wednesday 02 May 2018, Marco Boeringa wrote:

>
> "And also it is ultimately: Demo or it didn't happen - at the moment
> the only thing you can get at 1:50k is the old style map:
> https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster"
>
> This is a serious misunderstanding. This is a rasterized version
> of the new style 1:50k vector map based on generalization, that just
> happens to look very close to the originally manually generalized
> one: that was the whole target of the effort. They make available the
> rasters for clients, as it is just an easy way to consume the data,
> styled and all.

If that is the result of what you so boldly described as the first fully
automated generalization workflow of a national mapping organization
that would be fairly underwhelming - both in terms of "fully automated"
and "successful".

--
Christoph Hormann
http://www.imagico.de/
