The Future of Map-Making is Open and Powered by Sensors and AI

The tools of digital map-making today look nothing like those we had even a decade ago. Driven by a mix of grassroots energy and passion combined with innovations in technology, map-making has evolved rapidly, marked by three inflection points: the dawn of consumer GPS, the availability of high-resolution aerial imagery at scale, and the shift, now underway, to large-scale AI-powered map-making tools.

Automatically detecting salient features from open street-level imagery could accelerate map-making by a factor of 10

For OpenStreetMap (OSM), the availability of affordable and accurate consumer GPS devices was a key enabler in 2004, when Steve Coast and an emergent community of trailblazers (literally!) biked around, captured GPS traces, and created the first version of the map using rudimentary tooling.

The wide availability of high-resolution, up-to-date aerial and satellite imagery became the next map-making game changer around 2009-2010. It empowered people worldwide to contribute to the map, not just in places they knew, but anywhere in the world where imagery was available. This led to the rapid growth of mappers worldwide and the further expansion of a global map, aiding notable humanitarian support efforts, such as the enormous mapping response immediately following the 2010 earthquake in Haiti.

Fast forward to today, and we find ourselves in the midst of yet another massive change in map-making, this time fueled by the ubiquity of sensors, artificial intelligence (AI), and machine learning (ML). The three-pronged combination of mature software frameworks, a thriving developer and research community, and commoditized GPU-based hardware enables an unprecedented wave of AI-powered technology for consumers as well as businesses.

It did not take long for the map-making community to harness this power and begin applying it to ortho- and street-level imagery to automatically generate observed changes to the map. When these results are put in the hands of the human mapping community, we are confident they will reduce the effort to create and enhance maps by a factor of 10.

At Telenav, we jumped on this trend early by building and growing OpenStreetCam, as have others with a stake in OSM, such as Facebook.

An important element, however, has been holding back more rapid adoption and refinement of machine learning-based map generation: the lack of openness in the space. For various reasons, both data and software have largely been kept in silos, closed to contributions from the community. In our view, creating an open ecosystem around new map-making technology is vital – openness and creativity are what made OSM a success in the first place, because mappers could capture what they deeply cared about.

We are convinced that an open ecosystem around machine learning for map-making is the only way to ensure that the community can embrace this technology and make it its own. To that end, Telenav is opening up three key components of the OpenStreetCam stack:

  • A training set of images. We have invested more than five person-years in creating a training set of street-level images focused on common road signs. The set consists of well over 50,000 images and will be available to anybody under a CC-BY-SA license. We will continue to manually annotate images to double this set by the end of 2018, by which time it will be the largest truly open set of its kind.
  • Core machine-learning technology. Currently, our stack detects more than 20 different types of signs and traffic lights. We will continue to develop the system to add features important to the navigation and driving use cases, such as road markings, including lanes.
  • Detection results. Lastly, we will release all results from running the stack on the more than 140 million street-level images already in OpenStreetCam to the OSM community as a layer to enhance the map.

You can find everything mentioned above in the Telenav.AI repository on GitHub.
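To make the workflow concrete, here is a hypothetical sketch of the kind of detection run such a stack enables: iterate over captured frames, run a detector, and export georeferenced detections as a map layer. A generic torchvision model stands in for the actual Telenav.AI networks, and the file paths, label IDs, and GeoJSON layout are illustrative assumptions, not the repository's actual interface.

```python
# Hypothetical sketch: run an object detector over street-level frames and
# export the detections as GeoJSON. A stock torchvision detector stands in
# for the real sign-detection networks; paths and the frame list are made up.
import json
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect(image_path, score_threshold=0.8):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        (result,) = model([image])
    keep = result["scores"] >= score_threshold
    return result["boxes"][keep].tolist(), result["labels"][keep].tolist()

# Each frame carries the GPS fix recorded by the capture app (placeholder data).
frames = [("frame_0001.jpg", 37.3861, -122.0839)]  # (path, lat, lon)
features = []
for path, lat, lon in frames:
    boxes, labels = detect(path)
    for box, label in zip(boxes, labels):
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"image": path, "bbox": box, "class_id": int(label)},
        })

with open("detections.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```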

Our hope is that opening our stack and data will enable others to enhance both the training sets and the detection networks, and to put them to new, creative uses that fulfill the needs and wants of the diverse mapmaker and map-user communities.

Additionally, by openly licensing the data and software, we want to make sure that the next era of mapmaking with OSM remains open and accessible to everyone and fosters the creation of a new generation of mappers.

To celebrate this milestone, and to empower the community to run its own improvements to this stack on suitable hardware that would otherwise be cost-prohibitive, we are kicking off a competition around our training data and software stack, aimed at improving the quality of detections.

The winners will be able to run their detections on our cloud infrastructure against the more than 140 million images currently on OpenStreetCam, and of course release the improved and enhanced detection stack for all mappers to improve OSM. (Oh, and there’s $10,000 in prize money as well!)

In the longer term, we will release more parts of the map-making technology stack we are building, both to further enable OSM’s growth and expansion and so that it can, over time, play a central role in powering autonomous driving.

So, stay tuned for more from Telenav!


A glimpse into the future of Mapmaking with OSM

Over the last 12 months we have been looking extensively into how we can leverage AI and deep learning to help improve OpenStreetMap, and today we want to share a few details about how we envision the future of making maps, as well as more on what we are already doing. We see the emergence of self-driving vehicles as a game changer, and one key requirement for those vehicles is accurate and up-to-date maps. Commercial map providers currently remap each region roughly every 12-24 months, in a costly process using high-precision, high-cost vehicles. Our goal is a map that is updated on a minute-by-minute basis, with key streets covered at least once every day, and we set out to help make OSM ready for this use case.

Using OSM for Navigation Maps

At Telenav (and before that at skobbler) I have been actively involved in OSM for almost 10 years now, and it is truly remarkable how massively OSM has grown in that period: from a map used mostly by passionate enthusiasts to one used by hundreds of millions of users and by big companies such as Toyota, TripAdvisor, and Apple, to name just a few, to power their consumer products. Despite this success, navigation maps need many additional attributes that are not well covered in OSM, such as signposts, speed limits, turn restrictions, and lane information, all of which are needed to provide the best possible guidance.
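For reference, here is how these attributes are typically expressed in OSM's tagging scheme, sketched as Python literals. The tagging conventions are standard OSM practice; the way and node IDs are placeholders.

```python
# Standard OSM tagging conventions for navigation attributes (IDs are placeholders).
speed_limit_way = {"highway": "residential", "maxspeed": "30 mph"}

lane_info_way = {
    "lanes": "3",
    "turn:lanes": "left|through|through;right",  # per-lane turn indications
}

signpost_way = {
    # Destinations shown on the signpost for a motorway exit link
    "destination": "San Jose;Santa Cruz",
    "destination:ref": "CA 17",
}

# Turn restrictions are relations joining the ways involved in the maneuver.
turn_restriction_relation = {
    "type": "restriction",
    "restriction": "no_left_turn",
    "members": [
        {"role": "from", "type": "way",  "ref": 111},
        {"role": "via",  "type": "node", "ref": 222},
        {"role": "to",   "type": "way",  "ref": 333},
    ],
}
```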

[Image: Speed limit coverage]

[Image: Turn restriction coverage]

[Image: Turn restriction coverage in the United States]

To close the turn restriction gap in particular, we use anonymized GPS probe data from our millions of collaborators and from partners like Inrix to detect likely turn restrictions based on observed turn behavior. This data is shared with the community via ImproveOSM, and for the most likely cases we also apply a high routing penalty to those turns so our users avoid the maneuvers where possible. This way we have detected 139,181 turn restrictions and increased coverage in a meaningful way.
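As a rough sketch of the idea (not our production pipeline): for every approach to an intersection, count how often drivers actually take each possible exit; an exit that is almost never taken despite heavy traffic on the approach is a candidate restriction. The thresholds and the map-matched input format below are illustrative assumptions.

```python
# Toy turn-restriction detector over map-matched, anonymized GPS traces.
from collections import Counter

MIN_APPROACHES = 200    # need enough traffic to draw a conclusion
MAX_TURN_SHARE = 0.002  # maneuver share below which a restriction is likely

def likely_restrictions(matched_maneuvers, possible_exits):
    """matched_maneuvers: iterable of (via_node, from_way, to_way) tuples;
    possible_exits: {(via_node, from_way): [to_way, ...]} from the road graph."""
    approaches = Counter()
    turns = Counter()
    for via, frm, to in matched_maneuvers:
        approaches[(via, frm)] += 1
        turns[(via, frm, to)] += 1

    candidates = []
    for (via, frm), exits in possible_exits.items():
        n = approaches[(via, frm)]
        if n < MIN_APPROACHES:
            continue
        for to in exits:
            if turns[(via, frm, to)] / n <= MAX_TURN_SHARE:
                # Drivers almost never make this maneuver: flag it for review.
                candidates.append((via, frm, to, turns[(via, frm, to)], n))
    return candidates
```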

Next step: Higher accuracy with Computer Vision

Speed limits, lanes, and signposts are significantly trickier, as they cannot be identified purely from GPS probe data. This is why we started our OpenStreetView project to capture the necessary images: there was no truly open street-level imagery project that we could use (when we approached Mapillary, they asked for hundreds of thousands of dollars in license fees, which was not an option for us).

In parallel to the OpenStreetView project, we have invested heavily in computer vision algorithms and established a cooperation with the Technical University of Cluj, drawing on their more than 15 years of experience in the field. Our goal is to use computer vision to automatically build maps from these images.

Over the last year we have made very significant progress, and we can now detect speed limits, turn signs, and signposts (including running OCR on the text in those signs). These detections are then reviewed by our editors and added directly to OSM.

[Slideshow: our computer vision pipeline for detecting turn signs, speed limits, and signpost OCR]

[Image: Input picture]
[Image: Panel detection]
[Image: Glyph segmentation]
[Image: Character grouping into words]
[Image: OCR and classification results]
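For readers who want to experiment, here is a toy reconstruction of those stages using off-the-shelf OpenCV and pytesseract instead of our trained networks. The color threshold for panel detection and the contour-area cutoff are illustrative assumptions; real signpost reading relies on learned detectors rather than color heuristics.

```python
# Toy signpost-reading pipeline: panel detection -> glyph segmentation -> OCR.
import cv2
import pytesseract

def read_signpost(image_path):
    frame = cv2.imread(image_path)

    # 1. Panel detection: find large green regions (typical US guide signs).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 60, 40), (90, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    panels = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 5000]

    results = []
    for x, y, w, h in panels:
        panel = frame[y:y + h, x:x + w]

        # 2. Glyph segmentation: isolate the light glyphs on the dark panel.
        gray = cv2.cvtColor(panel, cv2.COLOR_BGR2GRAY)
        _, glyphs = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # 3 + 4. Character grouping and OCR: delegated to Tesseract, which
        # groups recognized characters into words internally.
        text = pytesseract.image_to_string(glyphs)
        results.append(((x, y, w, h), text.strip()))
    return results
```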

We have built a map editor that allows our team of 20+ mappers to review those detections internally and add them to OSM.

Using this tool we have so far added 19,798 map features (turn restrictions, one-ways, signs) to OSM, and every week we add hundreds of new turn restrictions and other signs to make the map better.

[Screenshots: Map editor tool]

Advanced level: Creating high-accuracy maps (ADAS / HD maps)

The next level of this challenge is to create the high-accuracy maps needed by self-driving cars and by ADAS (Advanced Driver Assistance Systems) applications. Those maps need an accuracy of under 2 m, which OSM typically does not provide consistently and which, as we learned through a lot of trial and error, is very hard to achieve purely from GPS probes. Looking for better accuracy, our natural choice was to leverage data that is already available from the car itself. We therefore connected our OpenStreetView application to the vehicle via the OBD2 port (available on virtually every car manufactured in the last ~20 years) and combined our phone-based data with data coming directly from the car, such as speed, or on some models even steering wheel angle via OpenXC. With this we have achieved accuracy 5-10x higher than phone-based GPS alone, and with several passes over one road we can create truly high-accuracy maps.
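To give a feel for why the vehicle speed signal helps, here is a minimal sketch of dead-reckoning between GPS fixes using the speed reported over OBD2 (PID 0x0D reports speed in km/h). The input format, blend factor, and simple blending step are illustrative assumptions; our actual trip enhancement is considerably more sophisticated.

```python
# Toy GPS + OBD2-speed fusion: dead-reckon between fixes, blend at each fix.
import math

EARTH_RADIUS_M = 6_371_000

def dead_reckon(lat, lon, heading_deg, speed_mps, dt):
    """Advance a lat/lon position by speed*dt metres along the given heading."""
    d = speed_mps * dt
    h = math.radians(heading_deg)
    dlat = (d * math.cos(h)) / EARTH_RADIUS_M
    dlon = (d * math.sin(h)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

def fuse(gps_fixes, obd_speeds, blend=0.7):
    """gps_fixes: [(t, lat, lon, heading_deg)]; obd_speeds: [(t, speed_mps)];
    both sorted by time, with speed sampled much more often than position."""
    t0, lat, lon, heading = gps_fixes[0]
    fix_index, prev_t = 1, t0
    track = [(t0, lat, lon)]
    for t, speed in obd_speeds:
        if t <= t0:
            continue  # ignore speed samples before the first position fix
        lat, lon = dead_reckon(lat, lon, heading, speed, t - prev_t)
        prev_t = t
        # When we pass a GPS fix, blend toward it rather than jumping to it.
        if fix_index < len(gps_fixes) and t >= gps_fixes[fix_index][0]:
            _, glat, glon, heading = gps_fixes[fix_index]
            lat = blend * lat + (1 - blend) * glat
            lon = blend * lon + (1 - blend) * glon
            fix_index += 1
        track.append((t, lat, lon))
    return track
```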

[Image: Trip enhancement]

Our vision of the future of map-making

We believe that if enough users help record the necessary images via OpenStreetView, maps can be created in near real time at unprecedented accuracy. This would be a major enabler for self-driving cars and for keeping navigation systems up to date. To make that possible, we are also in the early stages of working with several car manufacturers to use data from their onboard cameras for these detections. This way, millions of cars from our OEM partners could eventually enhance maps with this technology, and we will share that data with the OSM community to create even higher-quality maps than today.

Over the next few weeks we will use this blog to go deeper into the individual modules we have built to make this future happen. We also look forward to feedback from the community.

[Photos: The Telenav OSM / OpenStreetView team]
