Over the last 12 months we have started to look extensively into how we can leverage AI and deep learning to help improve OpenStreetMap, and today we want to share a few details about how we envision the future of map-making, as well as what we are already doing. We see the emergence of self-driving vehicles as a game-changer, and one key requirement for those vehicles is accurate and up-to-date maps. Commercial map providers currently re-map each region roughly every 12-24 months, in a costly process that relies on high-precision, high-cost vehicles. Our goal is a map that is updated on a minute-by-minute basis, with key streets covered at least once every day. This is the goal we set out to tackle with OSM, helping to make it ready for this use case.
Using OSM for Navigation Maps
At Telenav (and before that at skobbler) I have been actively involved in OSM for almost 10 years now, and it is truly remarkable how massively OSM has grown in that period: from a map used mostly by passionate enthusiasts to a map used by hundreds of millions of users and by big companies such as Toyota, Tripadvisor or Apple, to name just a few, to power their consumer products. Despite this success, we have seen that navigation maps need many additional attributes that are not well covered in OSM, such as signposts, speed limits, turn restrictions or lane information, in order to provide the best possible guidance.
Speed limit coverage
Turn restriction coverage
Turn restriction coverage in the United States
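For readers less familiar with how these attributes are modelled in OSM, the snippet below illustrates the kind of tags involved, written as plain Python dictionaries purely for illustration (this is not Telenav tooling, and the values are made up):

```python
# Illustration only: typical OSM tags for the navigation attributes mentioned
# above. Speed limit and lane information live as tags on the way itself.
way_tags = {
    "highway": "primary",
    "maxspeed": "50",                            # speed limit (km/h by default)
    "lanes": "3",
    "turn:lanes": "left|through|through;right",  # per-lane turn indications
    "destination": "Cluj-Napoca",                # signpost / guidance destination
}

# Turn restrictions are not tags on a way; they are relations that tie a
# "from" way, a "via" node and a "to" way together (see the relation example
# further below).
```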
What we have done, especially to close the turn restriction gap, is to use anonymized GPS probe data from our millions of users and from partners like Inrix to detect where turn restrictions are likely, based on observed turn behavior. This data is shared with the community via ImproveOSM, and for the most likely cases we also put a high penalty on those turns for our own users so they avoid the maneuvers where possible. This way we have been able to detect 139,181 turn restrictions and increase coverage in a meaningful way.
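To give a feel for the underlying idea (the actual ImproveOSM detection is more involved), here is a minimal sketch of the turn-behavior heuristic; the function name and threshold values are illustrative assumptions, not our production logic:

```python
from collections import Counter

def likely_turn_restrictions(observed, possible_turns,
                             min_approach_traffic=200, max_turn_share=0.005):
    """Flag topologically possible turns that probes almost never take.

    observed        -- iterable of (from_way, via_node, to_way) map-matched maneuvers
    possible_turns  -- set of (from_way, via_node, to_way) allowed by road geometry
    Thresholds are illustrative only.
    """
    counts = Counter(observed)

    # Total traffic entering each intersection from a given approach way.
    approach_totals = Counter()
    for (frm, via, _to), n in counts.items():
        approach_totals[(frm, via)] += n

    candidates = []
    for frm, via, to in possible_turns:
        total = approach_totals[(frm, via)]
        taken = counts[(frm, via, to)]
        # Plenty of traffic passes through, yet (almost) nobody makes this turn:
        # a strong hint that a turn restriction exists on the ground.
        if total >= min_approach_traffic and taken / total <= max_turn_share:
            candidates.append({"from": frm, "via": via, "to": to,
                               "taken": taken, "approach_total": total})
    return candidates
```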
Next step: Higher accuracy with Computer Vision
Speed limits, lanes, and signposts are significantly trickier, as they cannot be identified from GPS probe data alone. This is why we started our OpenStreetView project to capture the necessary images: there was no truly open street-level imagery project that we could use (when we approached Mapillary, they asked for hundreds of thousands of dollars in license fees, which was not an option for us).
In parallel to the OpenStreetView project, we have invested a lot in computer vision algorithms and established a cooperation with the Technical University in Cluj to draw on their more than 15 years of experience in the field. Our goal was to use computer vision to automatically build maps from these images.
Over the last year we have made very significant progress, and we are now able to detect speed limits, turn signs, and signposts (including OCR of the text on those signs). These detections are reviewed by our editors and then added directly to OSM (a simplified sketch of the pipeline stages follows the slideshow below).
<Slideshow with our computer vision images for detecting turn signs, OCR, speed limits>
Input picture
Panel detection
Glyph segmentation
Character grouping into words
OCR and classification results
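For the curious, a heavily simplified sketch of such a pipeline is shown below, using classical OpenCV operations; our production detectors are trained models, so treat this purely as an illustration of the stages in the slideshow (panel detection, glyph segmentation, recognition), with hand-picked thresholds and a stubbed-out classifier:

```python
import cv2

def detect_speed_limit_panels(image_bgr):
    """Very rough stand-in for panel detection: bright, roughly square blobs."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    # OpenCV 4 signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    panels = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 30 and 0.5 < w / float(h) < 1.3:   # plausible sign proportions
            panels.append(image_bgr[y:y + h, x:x + w])
    return panels

def segment_glyphs(panel_bgr):
    """Glyph segmentation: dark connected components inside the panel."""
    gray = cv2.cvtColor(panel_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)   # left-to-right order
    return [binary[y:y + h, x:x + w] for x, y, w, h in boxes]

def classify_glyph(glyph_binary):
    """Stub classifier; a real system would run a trained model here."""
    return "?"

def recognize(glyphs):
    """Group the segmented glyphs into a text string (OCR stage)."""
    return "".join(classify_glyph(g) for g in glyphs)
```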
We have built a map editor that allows our team of 20+ mappers to review those detections internally and add them to OSM.
Using this tool we have so far added 19,798 map features (turn restrictions, one-ways, signs) to OSM, and every week we add hundreds of new turn restrictions and other signs to make the map better.
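For context, a turn restriction in OSM is a relation tying together the approach way, the intersection node and the exit way. In rough terms, the data our editors add looks like this (shown as a plain Python structure for illustration; the member IDs are made up):

```python
# Illustration of the OSM data model for a turn restriction (not the editor's
# internal format): a relation with "from", "via" and "to" members.
no_left_turn = {
    "type": "relation",
    "tags": {
        "type": "restriction",
        "restriction": "no_left_turn",
    },
    "members": [
        {"type": "way",  "ref": 111111, "role": "from"},  # way the driver comes from
        {"type": "node", "ref": 222222, "role": "via"},   # intersection node
        {"type": "way",  "ref": 333333, "role": "to"},    # way the turn would lead onto
    ],
}
```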
<MAP EDITOR TOOL SCREENSHOTS>
Map editor tool
Map editor tool
Advanced level: Creating high-accuracy maps (ADAS / HD maps)
The next level of this challenge is to create the high-accuracy maps needed by self-driving cars and by ADAS (Advanced Driver Assistance Systems) applications. Those maps need an accuracy of better than 2 m, which OSM typically does not provide consistently and which, as we learned through a lot of trial and error, is very hard to achieve from GPS probes alone. We looked into how we could do better, and the natural choice was to leverage data that is already available in the car. We therefore connected our OpenStreetView application to the OBD2 port (available on virtually every car manufactured in the last ~20 years) so that our phone-based data is combined with data coming directly from the car, such as speed, or on some models even the steering wheel angle, available via OpenXC. With this we have been able to achieve an accuracy that is 5-10x higher than phone-based GPS alone, and with several passes over the same road we can create truly high-accuracy maps.
<TRIP ENHANCEMENTS FROM HARALD>
Trip Enhancement
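As a rough illustration of the idea behind trip enhancement (not our actual algorithm), the wheel speed read from the OBD2 port constrains how far the car can really have moved between two GPS fixes, which lets us suppress much of the GPS jitter; the function below is a deliberately simple sketch with made-up parameter names:

```python
import math

def enhance_track(gps_fixes, obd_speeds_mps, max_ratio=1.5):
    """Smooth a GPS track using wheel-speed readings from the OBD2 port.

    gps_fixes      -- list of (t, x, y) in seconds and metres (already projected)
    obd_speeds_mps -- list of speeds (m/s) aligned with gps_fixes
    Fusion rule (illustrative only): if a GPS fix implies a jump much larger
    than the distance the wheels actually covered, pull the fix back towards
    the previous point along the same heading.
    """
    enhanced = [gps_fixes[0][1:]]
    for i in range(1, len(gps_fixes)):
        t_prev, *_ = gps_fixes[i - 1]
        t, x, y = gps_fixes[i]
        px, py = enhanced[-1]
        travelled = obd_speeds_mps[i] * (t - t_prev)   # distance the car really moved
        dx, dy = x - px, y - py
        jump = math.hypot(dx, dy)
        if jump > max_ratio * travelled and jump > 0:
            scale = travelled / jump                   # clamp the implausible GPS jump
            x, y = px + dx * scale, py + dy * scale
        enhanced.append((x, y))
    return enhanced
```

With several passes over the same road, the cleaned traces can then be averaged to approach the sub-two-metre accuracy mentioned above.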
Our vision of the future of map-making:
We believe that if enough users help record the necessary images via OpenStreetView, maps can be created in near real-time and at unprecedented accuracy. This would be a major enabler for self-driving cars and for up-to-date navigation systems. To make that possible, we are also in the early stages of working with several car manufacturers to use the data from their onboard cameras for these detections, so that in the future millions of cars from our OEM partners can use this technology to enhance maps, and we can share that data with the OSM community to create even higher quality maps than today.
Over the next few weeks we will go deeper on this blog into the individual modules we built to make this future happen, and we look forward to feedback from the community.
<TEAMPICTURE OF TELENAV OSM TEAM>
OpenStreetView team

