by Aptiv
License: Unknown
The nuScenes dataset (pronounced /nuːsiːnz/) is a public large-scale dataset for autonomous driving provided by nuTonomy-Aptiv. By releasing a subset of our data to the public, we aim to support public research into computer vision and autonomous driving. For this purpose we collected 1000 driving scenes in Boston and Singapore, two cities known for their dense traffic and highly challenging driving situations. The 20-second scenes are manually selected to show a diverse and interesting set of driving maneuvers, traffic situations and unexpected behaviors. The rich complexity of nuScenes will encourage the development of methods that enable safe driving in urban areas with dozens of objects per scene. Gathering data on different continents further allows us to study the generalization of computer vision algorithms across different locations, weather conditions, vehicle types, vegetation, road markings and left- versus right-hand traffic.

To facilitate common computer vision tasks such as object detection and tracking, we annotate 25 object classes with accurate 3D bounding boxes at 2Hz over the entire dataset. Additionally, we annotate object-level attributes such as visibility, activity and pose. The final dataset will include approximately 1.4M camera images, 400k LIDAR sweeps, 1.3M RADAR sweeps and 1.1M object bounding boxes in 40k keyframes. More content will follow soon. Starting in 2019 we will organize challenges on various computer vision tasks to benchmark state-of-the-art methods.

The nuScenes dataset is inspired by the pioneering KITTI dataset. nuScenes is the first large-scale dataset to provide data from the entire sensor suite of an autonomous vehicle (6 cameras, 1 LIDAR, 5 RADAR, GPS, IMU). Compared to KITTI, nuScenes includes 15-20x more keyframes and object annotations.
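As a quick sanity check, the 40k keyframe figure follows directly from the stated scene statistics (1000 scenes, 20 seconds each, annotated at 2Hz). A minimal sketch of that arithmetic, with the per-keyframe camera image count as the only derived quantity:

```python
# Published nuScenes scene statistics.
num_scenes = 1000         # driving scenes collected in Boston and Singapore
scene_length_s = 20       # each scene is 20 seconds long
annotation_rate_hz = 2    # 3D bounding boxes annotated at 2 Hz

# 1000 scenes * 20 s * 2 Hz = 40,000 annotated keyframes.
keyframes = num_scenes * scene_length_s * annotation_rate_hz
print(keyframes)  # 40000

# Each keyframe captures all 6 cameras, so keyframes alone account for
# 240k images; the ~1.4M total also includes non-keyframe camera frames.
keyframe_images = keyframes * 6
print(keyframe_images)  # 240000
```

This also makes clear why the approximately 1.4M camera images far exceed the keyframe count: cameras record continuously between the 2Hz annotation keyframes.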
Whereas most previously released datasets focus on camera-based object detection (Cityscapes, Mapillary Vistas, Apolloscapes, Berkeley Deep Drive), the goal of nuScenes is to look at the entire sensor suite and find novel techniques for sensor fusion. We also believe that the use of prior knowledge is essential to enable safe autonomous driving. Therefore, we release detailed map information for each scene that can be used to improve upon purely vision-based solutions. We hope that this dataset will allow researchers across the world to develop safe autonomous driving technology.
Categories: Autonomous Driving, Human
Sensors: LIDAR, RADAR, RGB Camera, IMU, GPS