After Competition, OpenStreetCam Can Now Detect Dozens of Sign Types in Australia and New Zealand

Last December and January, OpenStreetCam held an image collection competition in Australia and New Zealand. The three mappers in each country who collected the most points during December 2018 and January 2019 would each win a gift card: $100 for first place, and $25 each for second and third place. We just announced the winners to the communities in both countries. Congratulations to steve91, robbie-bloggs, ConsEbt, david-blyth, ivss-xx, and nicknz!

OpenStreetCam coverage in Melbourne after the competition

We ran these competitions because we wanted more mappers in Australia and New Zealand to get acquainted with OpenStreetCam and to consider contributing to this free and open platform for street-level images. The more contributions, the more help OpenStreetCam can be to OSM mappers! There were not many contributions in either country before, and if you go to the OpenStreetCam web site, you'll quickly see that there are still large gaps to fill. Still, OpenStreetCam coverage has grown by 800% since the beginning of December.

Head over to the ImproveOSM Blog for a step-by-step guide on how to get started with OpenStreetCam yourself!

OpenStreetCam now sports almost 1.5 million images in Australia and New Zealand combined

OSM mappers can use OpenStreetCam images to help with mapping. You can't see everything from an aerial image. Signs are a great example of useful mapping information that requires an on-the-ground perspective. This is where OpenStreetCam is particularly handy, because it automatically detects a growing variety of signs that appear in the photos, using an open source machine learning platform.
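To make this concrete, here is a minimal sketch of how a detected sign might be translated into the tags a mapper would add in an editor such as JOSM. The detection labels and tag choices below are illustrative assumptions, not OpenStreetCam's actual output format.

# Illustrative sketch only: these detection labels and their mapping to OSM
# tags are assumptions for the sake of example, not OpenStreetCam's real schema.
SIGN_TO_TAGS = {
    "stop": {"highway": "stop"},
    "give_way": {"highway": "give_way"},
    "speed_limit_60": {"maxspeed": "60"},
}

def tags_for_detection(label):
    """Return the OSM tags a mapper might add for a detected sign label."""
    return SIGN_TO_TAGS.get(label, {})

print(tags_for_detection("speed_limit_60"))  # {'maxspeed': '60'}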

Interested in a more in-depth look at the OpenStreetCam sign recognition AI? Have a look at our talk at State of the Map 2018 in Milan or our talk at State of the Map US last fall!

Using detected signs in JOSM

Results

For the sign detection platform to detect a variety of signs reliably, it needs training data. Your contributions during this competition have been invaluable in reaching that goal. Our Map Team looked at tens of thousands of images collected by the community during the competition in Australia and New Zealand, and validated more than 160,000 traffic signs found in these images. After feeding that data into the platform, we can now reliably detect more than 80 types of signs in Australia and New Zealand. As we continue to look at more of the images you contribute, the system will get smarter and detect additional types of signs.
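For readers curious what "feeding that data into the platform" looks like in practice, here is a minimal, generic sketch of fine-tuning an image classifier on validated sign crops. It is a simplified stand-in, not the actual OpenStreetCam training pipeline; the folder layout, model choice, and hyperparameters are all assumptions.

# Generic fine-tuning sketch (not Telenav's actual pipeline). It assumes the
# validated sign crops are arranged in per-class folders, e.g.
#   signs/train/stop/..., signs/train/give_way/...
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("signs/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classification
# head with one output per sign type found in the training folders.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The real platform does detection (finding signs in full street-level frames) rather than just classifying crops, but the loop above conveys the core idea: the more validated examples go in, the more sign types come out reliably.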

Do you want to help train the OpenStreetCam sign recognition AI? You can do this right from the OpenStreetCam web site. Read all about it in this blog post.

Validating automatic sign detections is a quick and easy way for anyone to help improve OpenStreetCam's ability to detect signs reliably

The Future of Map-Making is Open and Powered by Sensors and AI

The tools of digital map-making today look nothing like those we had even a decade ago. Driven by grassroots energy and passion combined with innovations in technology, we have seen a rapid evolution marked by three inflection points: the dawn of consumer GPS, the availability of high-resolution aerial imagery at scale, and, lastly, the shift to large-scale, AI-powered map-making tools in which we find ourselves today.

Automatically detecting salient features from open street-level imagery could accelerate map-making by a factor of 10

For OpenStreetMap (OSM), the availability of affordable and accurate consumer GPS devices was a key enabler in 2004, when Steve Coast and an emergent community of trailblazers (literally!) biked around, captured GPS traces, and created the first version of the map using rudimentary tooling.

The wide availability of high-resolution, up-to-date aerial and satellite imagery became the next map-making game changer around 2009-2010. It empowered people worldwide to contribute to the map, not just in places they knew, but anywhere in the world where imagery was available. This led to the rapid growth of mappers worldwide and the further expansion of a global map, aiding notable humanitarian support efforts, such as the enormous mapping response immediately following the 2010 earthquake in Haiti.

Fast forward to today, and we find ourselves in the midst of yet another massive change in map-making, this time fueled by the ubiquity of sensors, artificial intelligence (AI), and machine learning (ML). The three-pronged combination of mature software frameworks, a thriving developer and research community, and commoditized GPU-based hardware enables an unprecedented wave of AI-powered technology for consumers and businesses alike.

It did not take long for the map-making community to harness this power and begin applying it to ortho- and street-level imagery to automatically generate observed changes to the map. Put in the hands of the human mapping community, these detections will, without a doubt, reduce the effort to create and enhance maps by a factor of 10.

At Telenav, we jumped on this trend early, building and growing OpenStreetCam, as have others with a stake in OSM, such as Facebook.

An important element, however, has been holding back faster adoption and refinement of machine learning-based map generation: the lack of openness in the space. For various reasons, both data and software have largely been kept in silos and have not been open to contributions by the community. In our view, creating an open ecosystem around new map-making technology is vital – openness and creativity are what made OSM a success in the first place, because mappers could capture what they deeply cared about.

We are convinced that an open ecosystem around machine learning for map-making is the only way to ensure that this technology can be embraced and appropriated by the community. To that end, Telenav is opening up three key components of the OpenStreetCam stack:

  • A training set of images. We have invested more than five man-years in creating a training set of street-level images focused on common road signs. The set consists of well over 50,000 images, which will be available to anybody under a CC-BY-SA license. We will continue to manually annotate images to double this set by the end of 2018, by which time it will be the largest set of truly open images.
  • Core machine-learning technology. Currently, our stack detects more than 20 different types of signs and traffic lights. We will continue to develop the system to add features important to navigation and driving use cases, such as road markings, including lanes.
  • Detection results. Lastly, we will release all results from running the stack on the more than 140 million street-level images already in OpenStreetCam to the OSM community as a layer to enhance the map.

You can find everything mentioned above in the Telenav.AI repository on GitHub.
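As a hypothetical example of how the released detection results could be consumed, the sketch below filters a detections file down to a bounding box before reviewing it against OSM. The GeoJSON-style structure, file name, and coordinates are assumptions for illustration; check the Telenav.AI repository for the actual formats.

import json

# Hypothetical example: the released detections are assumed here to be a
# GeoJSON FeatureCollection of points. Check the Telenav.AI repository for
# the real file formats and names.
def detections_in_bbox(path, min_lon, min_lat, max_lon, max_lat):
    with open(path) as f:
        collection = json.load(f)
    selected = []
    for feature in collection.get("features", []):
        lon, lat = feature["geometry"]["coordinates"]
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            selected.append(feature)
    return selected

# Example: detections around central Melbourne (bounding box is illustrative).
melbourne = detections_in_bbox("detections.geojson", 144.9, -37.85, 145.0, -37.78)
print(len(melbourne), "detections in the area")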

Our hope is that opening our stack and data will enable others to enhance both the training sets and the detection nets and put them to new, creative uses that fulfill the needs and wants of the diverse mapmaker and map-user communities.

Additionally, by openly licensing the data and software, we want to make sure that the next era of mapmaking with OSM remains open and accessible to everyone and fosters the creation of a new generation of mappers.

To celebrate this milestone and to empower the community to run their own improvements to this stack on suitable hardware that would otherwise be cost-prohibitive, we are kicking off a competition around our training data and software stack, aimed at improving the quality of detections.

The winners will be able to run their detections on our cloud infrastructure against the more than 140 million images currently on OpenStreetCam, and of course release the improved and enhanced detection stack for all mappers to improve OSM. (Oh, and there's $10,000 in prize money as well!)

In the longer term, we will release more parts of the map-making technology stack we are building, to further enable OSM's growth and expansion and, over time, to allow it to play a central role in powering autonomous driving.

So, stay tuned for more from Telenav!