ImproveOSM plugin – new features

The Telenav OSM team just released a new version of the ImproveOSM plugin. We have added a location search box (a helpful feature for jumping to an area you want to map) and a button for downloading previously selected ways.

Location search box

A new feature we added to the ImproveOSM plugin is the location search box. It lets you jump to a desired geographical point at a higher zoom level after entering its coordinates in the text field. The input should contain the latitude and longitude values separated by a comma. If the plugin doesn’t understand your input, a helpful message is displayed. This works similarly to JOSM’s Jump to Position, but the search box has the advantage of a single input field and more thorough validation of the latitude and longitude values.
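To give a concrete feel for the kind of validation involved, here is a minimal sketch of a latitude/longitude parser with similar rules. This is purely illustrative Python, not the plugin’s actual code (the plugin itself is a Java JOSM plugin):

```python
def parse_lat_lon(text):
    """Parse a 'latitude, longitude' string and validate the ranges.

    Illustrative sketch only -- not the ImproveOSM plugin's actual code.
    """
    parts = [p.strip() for p in text.split(",")]
    if len(parts) != 2:
        raise ValueError("Expected two values separated by a comma, e.g. '46.77, 23.59'")
    try:
        lat, lon = float(parts[0]), float(parts[1])
    except ValueError:
        raise ValueError("Latitude and longitude must be decimal numbers")
    if not -90.0 <= lat <= 90.0:
        raise ValueError("Latitude must be between -90 and 90")
    if not -180.0 <= lon <= 180.0:
        raise ValueError("Longitude must be between -180 and 180")
    return lat, lon
```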



Download way button

Another new feature is the download way button for the Traffic Flow Direction layer. It enables the user to download the ways of the selected road segments into a new Data layer. This option is only available when the Traffic Flow Direction layer is active, and the button becomes enabled once one or more road segments are selected.


Getting Started with OpenStreetCam

In this post, we would like to guide you in making your very first contribution to OpenStreetCam. There are already 190 million images on OpenStreetCam covering more than 5 million kilometers of road, so obviously getting started is easy enough that it can be done without much guidance 😁 But in case you do need a little encouragement to record your first trip, or just want to see how it works before you try it yourself, read on.

The first thing you want to do is download the free app. OpenStreetCam apps exist for Android and iOS.

The OpenStreetCam app in the iPhone app store

When you first run the app, it will give a quick introduction about OpenStreetCam. Flick through that to get to the main screen. Then go to your Profile, where you can log in.

You can create an OpenStreetCam account by logging in with your existing OpenStreetMap, Facebook or Google accounts. You won’t need to create a separate password.

After logging in through the platform of your choice, you will see your new OpenStreetCam profile 🙂

Your profile will look a little empty compared to mine, but we are here to change that! Let’s go out and drive some.

You will need some sort of phone mount in your car so you can point the camera straight ahead, with a clear and unobstructed view of the road. I use an iOttie brand mount (an older version of this one), but any mount that will hold your phone in landscape mode reliably will do.

You will also want to connect your phone to power. We have spent a lot of time optimizing the app, but the recording still drains the battery quite fast.

Okay, we’re almost ready to go. We just need to start recording mode so the app will start taking pictures as you start moving. Before you do that though, take a moment to scroll around the map looking for streets that have no purple lines. That means that nobody has captured any images there yet, so those streets are extra valuable. (You get 10x points for them too.)

When you’re done with that, press the blue camera button to start recording mode.

You may notice that the app mentions a thing called ‘OBD’. This refers to a port in your car that transmits data about the current state of the vehicle. Using a compatible OBD dongle, OpenStreetCam can use this for improved location accuracy. This is optional but gives you twice the points if detected! If you want to learn more, drop us a line.

The app will not immediately start taking pictures. Using your phone’s built in sensors, it will detect when you start and stop moving. As long as you’re stopped, no pictures are taken. This saves space, time spent uploading, and mappers wading through duplicate images of the same location.

In recording mode, you can switch between a big camera view with a small minimap and the other way around, by tapping the minimap / mini camera in the bottom left.

As you drive, you will see your points increase as well as some other basic trip stats like number of pictures taken, space used, kilometers driven and recording time. As I mentioned before, roads that nobody has captured before are worth 10x the points. As you collect more points, you get higher up in the leaderboards and level up!

When you’re done driving, hit the record button to end the trip. You will now see a summary screen for your trip, showing where, how long and how far you’ve driven, as well as how many points you have collected on this trip.

Now that you’re done collecting your first images, it’s time to upload them to OpenStreetCam. This does not happen automatically by default (but you can change that in the settings). So as soon as you’re connected to Wi-Fi, go back to the app and go to ‘Upload’. There you will see the trip you just created.

You can tap on the trip to get more details. One cool thing you can do is ‘scrub’ through the trip.

Tap ‘Upload all’ in the top right corner to upload. You will notice that the file size is actually relatively small. That is because internally, the app compresses the images into a video stream that is unpacked into separate photos again at the server side, saving you time and upload bandwidth.

Once the upload is finished, you can go to openstreetcam.org and log in there. Use the same login method you used in the app, so if you used OpenStreetMap to log in on the app, use OpenStreetMap as your login provider on the web site as well.

Once you’re logged in, you can go to your profile to see your trip.

You can click on the trip to see the uploaded images. (Here is the trip I recorded for this demonstration.)

Notice that I should have wiped the snow ❄ off my car before I started recording… 😬

Finally, I highlighted two icons you see on the left-hand side of your trip detail window. If you are in the U.S., you may see number badges on them. The bottom one indicates how many street signs were recognized, and the other how many of those represent data that doesn’t seem to exist in OSM yet. That’s for a future post though!

Signs detected from my trip images. More on that in a future post!



Help Train OpenStreetCam’s Open Sign Detection Platform

Telenav open-sourced the machine learning-based sign detection platform that powers the automatic detection of nearly 100 sign types in the OpenStreetCam images you contributed. You can already see these detections in the latest version of the OpenStreetCam JOSM plugin to help you map, and iD integration will come soon as well.

Machine learning gets better with training. The more known instances of a particular sign that are fed into the system, the more reliable the automatic detections for that sign type will become.

Our Map Team has spent thousands of hours manually tagging and validating traffic signs in images, and the resulting training data is open source as well. But did you know you can help improve the detection system yourself as well? Let us show you how.

If you go to the trip details on the OpenStreetCam web site, you will see three ‘tabs’ on the left. The first one takes you to the main trip info. The second one takes you to an OSM edit mode that lets you quickly go over detections and see if they need to be added to OSM (that deserves a separate post!). The third tab is the sign validation mode. If the tab icon has a number with it, there are unverified signs to work on.

The detection validation mode on the OpenStreetCam web site

The bottom part of the screen shows all detected signs. The ones that have been validated already will have a green checkmark with them. The ones that have been invalidated will have a red ‘X’. 

You can validate or invalidate an automatic detection, depending on whether the sign in the image exactly matches it or not, by clicking the corresponding button on the left.

Power Validator Workflow

You can validate entire trips with many detected signs very quickly by using some of the power functions available:

  • Next to the trip slider, underneath the image, you will find a small magnifying glass button. Clicking it will automatically zoom and pan the image to the detection.
  • Use Cmd (Mac) / Alt (Windows / Linux) and the left and right arrows to quickly jump to the next detection.
  • Use Cmd / Alt and the up and down arrows to validate or invalidate the currently highlighted detection.

Skipping through detections quickly using the shortcut keys Cmd / Alt up and down


Summer Dispatch From The Telenav Map Team

It has been an exciting summer! Besides our regular work, there was the annual State of the Map conference that we were all really looking forward to. We launched a new ImproveOSM web site. OpenStreetCam dash-cams are being distributed to OSM US members. And more. Read all about it in our Summer Dispatch below!

State of the Map

Quite a few of us got to go to State of the Map in Milan, Italy! Our team hosted four presentations at the conference, and we are really happy with the interest and feedback we received. We made a lot of new map friends as well!

All SOTM presentations were recorded and posted on YouTube, so if you missed any of us, you can watch the presentations at your leisure.

Alina and Bogdan presenting our Machine Learning stack at SOTM 2018

We also had a booth at the conference where we talked about ImproveOSM and OpenStreetCam, and where 6 lucky winners received a Waylens OpenStreetCam dashboard camera!

Excited crowd right before one of the Waylens cameras is being given away!

Mapping

We continue to map in Canada, the United States, and Mexico. As always you can track our work on GitHub. We have been focusing a lot on adding missing road names for the larger metropolitan areas in the US. Our typical workflow is to identify local government road centerline data sources, verify the license, process them with Cygnus to find changed / new names, and manually add the names if we can verify them.

Local road centerline data the team identified in Colorado

We are excited that the US community is looking to build an overview of available road centerline databases from (local) governments. We hope the ones we identified can help bootstrap this initiative.

We also published some MapRoulette challenges around this topic. 

ImproveOSM

Right on time for State of the Map, we launched a complete redesign of improveosm.org, our portal for everything Telenav❤️OSM. The new site gives you quick access to our OSM initiatives, data and tools. Check it out!

We also released more than 20 thousand new missing road locations. These have been added to the existing database of currently more than 2.4 million missing road locations. An easy way to start editing based on these locations is to download the ImproveOSM plugin for JOSM.

Locations of the newly released missing roads

OpenStreetCam

The steady growth of OpenStreetCam continues. Almost 4.5 million kilometers of trips are in the OSC database. This amounts to about 165 million images!

We started a collaboration with OpenStreetMap US to run a Camera Lending program. Through the program, OSM US members can apply to borrow a custom Waylens Horizon camera for up to three months. The camera captures high resolution images for OSC and uploads them automatically. Almost 20 mappers have a camera already, and they have driven about 30 thousand kilometers in the past couple of months!

The passenger’s seat of our Camera Man ToeBee, as he gets ready to dispatch a bunch of Waylens cameras

That’s a wrap for our summer dispatch, folks! Thanks for reading and keep an eye on the blog for more from the Telenav Map Team. Be sure to follow us on Twitter as well: @improveOSM and @openstreetcam. 👋🏼

 


The Future of Map-Making is Open and Powered by Sensors and AI

The tools of digital map-making today look nothing like those we had even a decade ago. Driven by a mix of grassroots energy and passion combined with innovations in technology, we have seen a rapid evolution marked by three inflection points: the dawn of consumer GPS, availability of high-resolution aerial imagery at scale, and lastly a shift to large scale AI powered map-making tools in which we find ourselves today.

Automatically detecting salient features from open street-level imagery could accelerate map-making by a factor of 10

For OpenStreetMap (OSM), the availability of affordable and accurate consumer GPS devices was a key enabler in 2004, when Steve Coast and an emergent community of trailblazers (literally!) biked around, captured GPS traces, and created the first version of the map using rudimentary tooling.

The wide availability of high-resolution, up-to-date aerial and satellite imagery became the next map-making game changer around 2009-2010. It empowered people worldwide to contribute to the map, not just in places they knew, but anywhere in the world where imagery was available. This led to the rapid growth of mappers worldwide and the further expansion of a global map, aiding notable humanitarian support efforts, such as the enormous mapping response immediately following the 2010 earthquake in Haiti.

Fast forward to today, and we find ourselves in the midst of yet another massive change in map-making, this time fueled by the ubiquity of sensors, artificial intelligence (AI), and machine learning (ML). The three-pronged combination of mature software frameworks, a thriving developer and research community, and commoditized GPU-based hardware enables an unprecedented wave of AI-powered technology for consumers as well as businesses.

It did not take long for the map-making community to harness this power and begin applying it to ortho- and street-level imagery to automate the generation of observed changes to the map. Put in the hands of the human mapping community, these detections will, without a doubt, reduce the effort to create and enhance maps by a factor of 10.

At Telenav, we jumped on this trend early, building and growing OpenStreetCam, as have others with a stake in OSM, such as Facebook.

An important element, however, has been holding back a more rapid adoption and perfection of machine learning-based map generation: the lack of openness in the space. For various reasons, both data and software have largely been kept in silos and have not been open to contributions by the community. In our view, creating an open ecosystem around new map-making technology is vital – openness and creativity are what made OSM a success in the first place, because mappers could capture what they deeply cared about.

We are convinced that an open ecosystem around machine learning for map-making is the only way to ensure that this technology can be embraced and appropriated by the community. To that end, Telenav is opening up three key components of the OpenStreetCam stack:

  • A training set of images. We have invested more than five man-years in creating a training set of street-level images aimed at common road signs. The set consists of well over 50,000 images, which will be available to anybody under a CC-BY-SA license. We will continue to manually annotate images to double this set by the end of 2018, by which time it will be the largest set of truly open images.
  • Core machine-learning technology. Currently, our stack detects more than 20 different types of signs and traffic lights. We will continue to develop the system to add features important to the navigation and driving use cases, such as road markings including lanes.
  • Detection results. Lastly, we will release all results from running the stack on the more than 140 million street-level images already in OpenStreetCam to the OSM community as a layer to enhance the map.

You can find everything mentioned above in the Telenav.AI repository on GitHub.

Our hope is that opening our stack and data will enable others to enhance both the training sets and the detection nets, and to put them to new, creative uses that fulfill the needs and wants of the diverse mapmaker and map-user communities.

Additionally, by openly licensing the data and software, we want to make sure that the next era of mapmaking with OSM remains open and accessible to everyone and fosters the creation of a new generation of mappers.

To celebrate this milestone and to empower the community to run their own improvements to this stack on suitable hardware that is otherwise cost prohibitive, we are kicking off a competition around our training data and software stack, aimed at improving the quality of detections.

The winners will be able to run their detections on our cloud infrastructure against the more than 140 million images currently on OpenStreetCam, and of course release the improved and enhanced detection stack for all mappers to improve OSM. (Oh, and there’s $10,000 in prize money as well!)

In the longer term, we will be releasing more parts of the map-making technology stack that we are building, to further enable OSM’s growth and expansion and to let it play, over time, a central role in powering autonomous driving.

So, stay tuned for more from Telenav!

Map Metrics for OSM are now available

Telenav’s OSM team just released a portal where you can view different metrics on OSM.

Unlike other metrics views that are already available, this new tool for the OSM community focuses especially on navigation attributes such as the length of navigable roads, the number of turn restrictions, signposts and many more; 22 such metrics are available in total. You can check it out at https://metrics.improveosm.org

About the data

Metrics are computed weekly and should be available on the portal at the end of each week. They are generated for the whole world, using as input the planet PBF downloaded from the official mirrors made available by the OSM community.

Metrics are available starting with 8 February 2016. In the top left corner, you can choose to see them by week, by month or by quarter. We also have a nice feature for all OSM enthusiasts! Each metric in the left menu has a small info button that explains exactly what the metric means: a complete description, the rules we applied when computing it, which tags were used, whether we counted ways, nodes or relations, and so on.

How do we do it?

The platform was built using Apache Spark. Using big data technologies enabled us to compute metrics for the whole world: for countries, states, counties and a few metropolitan areas (metros are available only in North America for now). In order to use Apache Spark, we first had to convert the PBF to Parquet, which we did using an open-source parquetizer that can be found here. Once the Parquet files are in place, Spark’s DataFrame API lets us compute these metrics in just a couple of hours.
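As a rough illustration of what such a job can look like, here is a minimal PySpark sketch. It assumes the parquetizer’s usual output layout, with one Parquet file per element type and a tags column of key/value structs; the file name, column layout and the metric chosen here are assumptions for the example, not the exact ones our pipeline uses.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("osm-metrics-sketch").getOrCreate()

# Assumed layout: one Parquet file per OSM element type, with a `tags`
# column holding an array of {key, value} structs.
ways = spark.read.parquet("planet.osm.pbf.way.parquet")

# Example metric: number of ways per highway class.
highway_counts = (
    ways.select(F.explode("tags").alias("tag"))
        .filter(F.col("tag.key") == "highway")
        .groupBy(F.col("tag.value").alias("highway"))
        .count()
        .orderBy(F.desc("count"))
)

highway_counts.show(20, truncate=False)
```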

We have also made the latest parquet files available for general use here.

If you have any suggestions or feedback, please do not hesitate to contact us. You can find details in the About section.

Happy mapping!

New version of OpenStreetCam JOSM plugin with sign detections

This post also appears on my OSM diary.

The Telenav OSM team just released a new version of the OpenStreetCam JOSM plugin. The major new feature is the ability to show and manipulate street sign detections. Images in only a few areas are currently processed for sign detection, so it’s not very likely that you will see anything yet, but that will change over time as we catch up processing over 140 million images.


To enable detections, right-click on the OpenStreetCam layer in the Layers panel, and check ‘Detections’ under ‘Data to display’. You can filter the detections by the following criteria:

  • Not older than — show only detections (or images) from that date or newer.
  • Only mine — show only detections / images from my own OSM / OSC account.
  • OSM Comparison — show detections based on comparison with OSM data:
    • Same data — Only show signs that have corresponding tags / data already mapped in OSM
    • New data — Only show signs that do not have corresponding data in OSM and need to be mapped
    • Changed data — Only show signs that have existing tags in OSM but the value is different (for example a 50 km/h sign and the OSM way is mapped as 60 km/h)
    • Unknown — No match could be made between the detected sign and OSM data
  • Edit status — show detections based on manually set status of the detection:
    • Open — new detection, status not changed yet
    • Mapped — manually marked as mapped
    • Bad sign — manually marked as a bad detection
    • Other — other status
  • Detection type — show only signs of the selected types.
  • Mode — Show only automatic detections, manually tagged detections, or both.

For the filters OSM Comparison, Edit status and Detection type, you can select multiple values by using shift-click and command/ctrl-click.

In the main editor window, you can select a sign to load the corresponding photo, which will show an outline of the detected sign. If there are multiple signs in an image, you can select the next one by clicking on the location again. (This is something we hope to improve.)


In the new ‘OpenStreetMap detections’ panel, you can see metadata for the detection, and set the status to Mapped, Bad Detection, or Other. By marking signs that are not detected correctly as Bad Detection, you hide them from other mappers, and we will use that information to improve the detection system.

The plugin is available from the JOSM plugin list, and the source is on GitHub.

Geohash JOSM plug-in

The Telenav OSM team has a new JOSM plugin for you: Geohash. This plugin displays a layer on top of the JOSM map containing the corresponding geohashes, up to zoom level 10. It also allows searching for a specific geohash and moving the map to the corresponding area.

Our team has been using this plugin internally and thought it may be useful for some of you as well.

This plug-in can be used by those who work in specific areas based on geohash units.

How to use the Geohash plug-in

The geohashes are automatically generated based on the user map view and zoom level. Increased zoom means increased depth level for geohashes.
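For readers curious about what a geohash actually is: it is a short base-32 string that encodes a bounding box by repeatedly halving the longitude and latitude ranges, with each extra character narrowing the box further. Below is a minimal sketch of the standard encoding in Python, purely for illustration; the plugin itself is written in Java and does not use this code.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet

def encode_geohash(lat, lon, precision=6):
    """Encode a latitude/longitude pair as a geohash of `precision` characters."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits = []
    use_lon = True  # geohash interleaves bits, starting with longitude
    while len(bits) < precision * 5:
        rng, value = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value > mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        index = 0
        for bit in bits[i:i + 5]:
            index = (index << 1) | bit
        chars.append(BASE32[index])
    return "".join(chars)

# Example usage: a 6-character geohash covers a cell of roughly 1.2 km x 0.6 km.
print(encode_geohash(46.77, 23.59))
```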

To search for a geohash, use the Geohash plug-in dialog. Type or paste the geohash code in the text field and press the Search button. If the code is invalid, a message will be shown. Otherwise, the map view will be moved and zoomed over the selected geohash.

To clear generated geohashes, double-click a geohash and all geohashes belonging to its parent will be cleared. Also, right-clicking the Geohash layer shows a ‘Clear geohashes’ option that leaves only the depth-one grid.

You can find the source code here: https://github.com/Telenav/geohash-plugin

We are looking forward to your feedback!

Working with ImproveOSM Data Dumps

Our ImproveOSM pipeline produces a pretty impressive number of suggested roads missing from OSM, missing oneway tags, and missing turn restrictions, based on analysis of billions of GPS data points. We make the results available as frequent data dumps in CSV format. In this post, I want to look at a way to integrate this data into your OSM mapping workflow.

If you just want to see ImproveOSM data in JOSM wherever you are currently mapping, you can just use the ImproveOSM JOSM plugin. For advanced users who want more flexibility, or who want to use this data in different ways, this post offers some guidance.

The data dumps are available from here. For this example, I will work with the most recent Direction of Flow data file, which highlights ways with a potentially missing oneway tag. After downloading and unzipping it, you will have a CSV file of about 16.5 megabytes that looks like this:

wayId;fromNodeId;toNodeId;percentage;status;roadType;theGeom;numberOfTrips
148617028;1867720648;89191396;99.5378927911275;SOLVED;THROUGHWAY;LINESTRING(2.217821 48.922613,2.217719 48.922618,2.217408 48.922633);1082
33555379;322840377;322840383;98.6301369863014;INVALID;LOCAL_ROAD;LINESTRING(4.999815 47.34294,4.999957 47.343062,4.999965 47.34315);146
17271190;178942503;2341050872;100;OPEN;LOCAL_ROAD;LINESTRING(11.070503 50.139245,11.070525 50.139213,11.070616 50.139099,11.070693 50.139032);74
.....
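If you would rather explore the dump with a script before bringing it into a GIS, here is a minimal sketch using pandas and shapely. These libraries are my choice for the example, not something the dump requires, and the file name is a placeholder for whatever you downloaded.

```python
import pandas as pd
from shapely import wkt

# The dump is semicolon-separated; theGeom holds a WKT LINESTRING.
df = pd.read_csv("missing_oneways.csv", sep=";")

# Keep only suggestions that are still open and well supported by GPS trips.
open_tips = df[(df["status"] == "OPEN") & (df["numberOfTrips"] >= 50)].copy()
open_tips["geometry"] = open_tips["theGeom"].apply(wkt.loads)

print(len(open_tips), "open suggestions")
print(open_tips[["wayId", "percentage", "roadType"]].head())
```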

Since the theGeom field is in WKT, you can import it as a layer in QGIS pretty easily. Let’s fire up QGIS (I use 2.18) and add a Delimited Text layer.

In the dialog, select the downloaded CSV file as the file source. Set the delimiter to semicolon. QGIS detected for me that the geometry was in the theGeom field, and of type WKT, but you can set that manually if needed:

Upon clicking OK, QGIS wants us to define which CRS the coordinates are defined in. Select WGS84.

Now, we have a layer of line geometries that correspond to OSM ways that may be missing a oneway tag.

To make the file more manageable, let’s limit our selection to one country. I get country boundaries from Natural Earth (a fantastic resource!). After adding the country borders to QGIS, I can perform a spatial query. Before you do this, select the country you are interested in. I pick Mexico as an example.

Bring up the Spatial Query window. If you don’t see this menu item, you will need to enable the Spatial Query plugin.

Select the ImproveOSM layer as the source, and the Natural Earth layer as the query layer. Make sure to check the ‘1 Selected geometries’ checkbox, so we limit our query to Mexico.

The matching features will now be selected in the ImproveOSM layer. Make sure that layer is selected in the Layers Panel before you choose Layer -> Save As… from the QGIS menu. In this dialog, choose GeoJSON as the output type, select a destination filename, and make sure the CRS is set to WGS84 and ‘Save only selected features’ is checked, then save.
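If you prefer to script this country-filtering step instead of clicking through QGIS, roughly the same result can be obtained with geopandas. This is a sketch under a few assumptions: geopandas is my tool choice, the file names are placeholders, and the Natural Earth attribute name (ADMIN) may differ between releases.

```python
import geopandas as gpd
import pandas as pd
from shapely import wkt

# Load the dump and turn it into a GeoDataFrame in WGS84 (EPSG:4326).
df = pd.read_csv("missing_oneways.csv", sep=";")
gdf = gpd.GeoDataFrame(df, geometry=df["theGeom"].apply(wkt.loads), crs="EPSG:4326")

# Load Natural Earth country borders and pick one country, e.g. Mexico.
countries = gpd.read_file("ne_10m_admin_0_countries.shp")
mexico = countries.loc[countries["ADMIN"] == "Mexico", "geometry"].unary_union

# Keep only suggestions that intersect the selected country, then save as GeoJSON.
gdf[gdf.intersects(mexico)].to_file("mexico_oneways.geojson", driver="GeoJSON")
```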

Now you have a GeoJSON file with all OSM way geometries that may need a oneway tag. You can load this file into JOSM using its GeoJSON plugin. To organize your work going through these, I would recommend using the Todo plugin and adding the GeoJSON features to the todo list.

Detecting Traffic Signs in OpenStreetCam

OpenStreetCam’s mission is to help you improve OSM with street-view imagery. Photos taken with regular smartphones seem to be good enough for capturing map features like traffic signs, lanes or crosswalks. However, browsing the 120 million+ photos in OSC to find relevant things to map will take a while. The human factor is fundamental to OSM’s culture and we don’t see that changing, but we want to make editing street related attributes more efficient with automation.

We’re happy to announce a beta release of traffic sign recognition on OpenStreetCam photos, made possible with machine learning. We have processed a few million photos and detected around 500,000 traffic signs so far, currently available for tracks in several areas in the United States and Canada. We’re working on extending the training sets and optimizing the processing so that the covered area expands soon.

What’s new from a user perspective: the track page on openstreetcam.org will now show detected traffic signs when available:

There’s a preview list of all detections in the track, detection overlays on photos and, of course, filters. Filters might not have a reputation as something really exciting, but we’re excited about one of ours: the OSM status. Here’s why: after detecting a sign, we compare it to the corresponding OSM feature and check whether they’re consistent. Based on that, filtering is available.

For a practical example, let’s take speed limits: instead of manually cross-checking every detection with the maxspeed tag in OSM, you can review only the detections where maxspeed is presumably not set, or where the value in OSM is different. Just tick the Need review in OSM box.

Here are a few more examples of trips that have already been processed with our sign detections.

What’s next?

We’re busy working on a few things:

  • Scale the training sets and pipeline to extend the supported areas.
  • Traffic sign integration in the JOSM plugin.
  • Support for tagging new traffic signs on the web page.

If you like what we do and want to help:

  • First and foremost, you can use detections to improve OSM. If you see detections on tracks, check them out, see what needs reviewing in OSM and edit. You can open iD or JOSM at the photo’s location straight from the web page.
  • Help us improve the traffic sign recognition. There’s a chance you will find some bad detections. You can review them and flag whether they’re good or bad using the two buttons above the photo. We’re adding those reviews to the training sets to improve recognition, so please play nice.
  • Help us add these detections to the iD editor as well.

Tip: you can navigate between detections with Ctrl/Cmd + right/left arrows and confirm/invalidate with Ctrl/Cmd + up/down arrows. Goes pretty fast.
