Happy 1 year CARLA: the long-awaited CARLA 0.9.1 release is finally among us!
It has been a long journey for the CARLA simulator. The team started this adventure at the end of 2016, aiming to bring a state-of-the-art Autonomous Driving simulator to the open-source community. Our goals were clear: democratising Autonomous Driving via accessible simulation, and creating a platform that enables academics and industry members to share knowledge and results in the open. In November 2017 we made the first public release of CARLA. At that time, the platform was a humble simulator built around the use cases of camera-based policy learning and data-acquisition tasks. A great starting point, but not enough. Therefore, we decided to completely redesign the platform to produce a more advanced and flexible autonomous driving simulator.
We have dedicated the past 10 months to this redesign. The 0.9.0 release was an initial showcase of the effort, introducing the new CARLA API, the multi-client architecture, and the capacity to control all vehicles in the simulation at will.
In this 0.9.1 release, we have added critical features to enable the ingestion of new content (maps) made with external tools, and simple navigation of these maps using a new API based on waypoints and map queries. To this end, we have adopted OpenDRIVE as the core of CARLA's road-network format. Our map representation is built on top of OpenDRIVE, allowing for an easy-to-use and versatile navigation API. We also introduce Town03, a complex urban scene with multi-lane roads, tunnels, roundabouts and many other interesting features, as an example of a map generated semi-automatically with an external tool.
Now let's dive into the new features of this release, and please be aware that this is still a development release!
All the driving messages produced by the server in the 0.8.X branch have been incorporated back into CARLA 0.9.1 as sensors. This enables the client to detect collisions and determine lane changes. Collisions against both the static layout of the scene and dynamic objects such as vehicles are detected by the new `sensor.other.collision` sensor:
```python
class CollisionSensor(object):
    """Encapsulates sensor.other.collision messages."""

    def __init__(self, parent_actor, hud):
        self.sensor = None
        self._parent = parent_actor
        self._hud = hud
        self._history = []
        world = self._parent.get_world()
        bp = world.get_blueprint_library().find('sensor.other.collision')
        self.sensor = world.spawn_actor(bp, carla.Transform(), attach_to=self._parent)
        # We need to pass the lambda a weak reference to self to avoid circular
        # reference.
        weak_self = weakref.ref(self)
        self.sensor.listen(lambda event: CollisionSensor._on_collision(weak_self, event))

    @staticmethod
    def _on_collision(weak_self, event):
        self = weak_self()
        if not self:
            return
        actor_type = ' '.join(event.other_actor.type_id.replace('_', '.').title().split('.')[1:])
        self._hud.notification('Collision with %r' % actor_type)
        impulse = event.normal_impulse
        intensity = math.sqrt(impulse.x**2 + impulse.y**2 + impulse.z**2)
        self._history.append((event.frame_number, intensity))
        ...
```
In this example we will receive a message each time the `parent_actor` (a vehicle) crashes against another actor in the scene, indicating its type and the magnitude of the impact. This sensor also reports collisions against static elements of the map, such as traffic signs, walls, traffic lights, and sidewalks.
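For reference, the intensity reported by the sensor handler above is just the Euclidean norm of the impulse vector. The standalone sketch below reproduces that computation without a simulator; the `Impulse` namedtuple is a stand-in for the `carla.Vector3D` carried by `event.normal_impulse`:

```python
import math
from collections import namedtuple

# Stand-in for carla.Vector3D; only the x, y, z fields are used here.
Impulse = namedtuple('Impulse', ['x', 'y', 'z'])

def collision_intensity(impulse):
    # Same computation as in CollisionSensor._on_collision: the Euclidean
    # norm of the collision impulse vector.
    return math.sqrt(impulse.x**2 + impulse.y**2 + impulse.z**2)

print(collision_intensity(Impulse(3.0, 4.0, 0.0)))  # → 5.0
```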
A new sensor, `sensor.other.lane_detector`, can now be used to detect lane changes. New CARLA maps are more complex, containing multi-lane roads. In this context, it is important to detect when vehicles are changing lanes, determining which type of lane marking the vehicle has crossed and the direction of the target lane. See the code example below:
```python
class LaneInvasionSensor(object):
    def __init__(self, parent_actor, hud):
        self.sensor = None
        self._parent = parent_actor
        self._hud = hud
        world = self._parent.get_world()
        bp = world.get_blueprint_library().find('sensor.other.lane_detector')
        self.sensor = world.spawn_actor(bp, carla.Transform(), attach_to=self._parent)
        # We need to pass the lambda a weak reference to self to avoid circular
        # reference.
        weak_self = weakref.ref(self)
        self.sensor.listen(lambda event: LaneInvasionSensor._on_invasion(weak_self, event))

    @staticmethod
    def _on_invasion(weak_self, event):
        self = weak_self()
        if not self:
            return
        text = ['%r' % str(x).split()[-1] for x in set(event.crossed_lane_markings)]
        self._hud.notification('Crossed line %s' % ' and '.join(text))
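The notification string assembled in `_on_invasion` can be exercised on its own. In this sketch, plain strings stand in for the crossed lane-marking values (their exact spelling in CARLA is an assumption here), and `sorted()` is added so that iterating the set gives a deterministic output:

```python
# Stand-ins for event.crossed_lane_markings; the real values are carla
# objects, so these names are illustrative only.
crossed = ['Broken', 'Broken', 'Solid']

# Same formatting as LaneInvasionSensor._on_invasion, with sorted() added
# because a Python set has no guaranteed iteration order.
text = ['%r' % s for s in sorted(set(crossed))]
print('Crossed line %s' % ' and '.join(text))  # → Crossed line 'Broken' and 'Solid'
```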
It is now possible to access a high-level representation of the road network via the new Map class. Map objects let us retrieve recommended spawn points for vehicles, i.e. places where it is safe to instantiate new actors.
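In a live session the spawn points would come from `world.get_map().get_spawn_points()`; the sketch below substitutes plain (x, y, z) tuples for the returned `carla.Transform` objects so it runs without a simulator:

```python
import random

# Stand-ins for the carla.Transform objects returned by
# map.get_spawn_points(); (x, y, z) tuples are used for illustration only.
spawn_points = [(10.0, 0.0, 0.3), (25.0, 0.0, 0.3), (40.0, 0.0, 0.3)]

random.seed(0)
point = random.choice(spawn_points)  # a safe place to spawn a new vehicle
print(point)
```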
The Map class also makes driving from the client side very easy through the new waypoint-querying API. In the code below we obtain the waypoint associated with the current location of an actor:
```python
w = map.get_waypoint(location)
```
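Conceptually, `get_waypoint` returns the waypoint closest to the queried location. A hypothetical pure-Python sketch of that behaviour, with (x, y) tuples standing in for waypoints and locations so no simulator is needed:

```python
import math

# Stand-ins for pre-generated waypoint positions on the road network.
waypoints = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]

def get_nearest(location):
    # Return the waypoint with the smallest Euclidean distance to location.
    return min(waypoints, key=lambda w: math.hypot(w[0] - location[0], w[1] - location[1]))

print(get_nearest((6.2, 0.5)))  # → (5.0, 0.0)
```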
Users can also generate waypoints at a given approximate distance, either from an existing waypoint with `waypoint.next(distance)` or all over the map with `map.generate_waypoints(distance)`.
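Chaining `waypoint.next(distance)` calls is how a route unfolds waypoint by waypoint. The sketch below mimics that pattern with a minimal stand-in class, so it runs without a simulator:

```python
class FakeWaypoint:
    """Stand-in for carla.Waypoint: next(d) returns the waypoints reachable
    after driving d metres (here, a fixed list of successors)."""

    def __init__(self, road_id, successors=()):
        self.road_id = road_id
        self._successors = list(successors)

    def next(self, distance):
        return list(self._successors)

# A straight two-segment road: follow next() until the road ends.
end = FakeWaypoint(2)
w = FakeWaypoint(1, [end])
visited = [w.road_id]
while w.next(1.0):
    w = w.next(1.0)[0]
    visited.append(w.road_id)
print(visited)  # → [1, 2]
```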
Overall, users can make use of these new functions to create their own navigation algorithms:
```python
import random

import carla
import pygame

# HUD, World, draw_waypoints, args and display come from the surrounding
# example script and are assumed to be defined.

client = carla.Client(args.host, args.port)
client.set_timeout(2.0)

hud = HUD(args.width, args.height)
world = World(client.get_world(), hud)
world.vehicle.set_simulate_physics(False)

m = world.world.get_map()
w = m.get_waypoint(world.vehicle.get_location())

clock = pygame.time.Clock()
count = 0
while True:
    clock.tick_busy_loop(60)
    world.tick(clock)
    world.render(display)
    pygame.display.flip()
    if count % 10 == 0:
        nexts = list(w.next(1.0))
        print('Next(1.0) --> %d waypoints' % len(nexts))
        if not nexts:
            raise RuntimeError("No more waypoints!")
        w = random.choice(nexts)
        text = "road id = %d, lane id = %d, transform = %s"
        print(text % (w.road_id, w.lane_id, w.transform))
        if count % 40 == 0:
            draw_waypoints(world.world, w)
            count = 0
    t = w.transform
    world.vehicle.set_transform(t)
    count += 1
```
Figure 1. Waypoints generated on-the-fly using the new Waypoint class
One of the most exciting additions of this release is compatibility with new maps created with external tools. We want users to easily create their own maps in CARLA. Towards this end, we adopted OpenDRIVE as our map-definition standard. Using a well-known standard reduces the effort needed to create new content compatible with the CARLA simulator.
We have been collaborating closely with VectorZero to guarantee full compatibility between RoadRunner and CARLA. RoadRunner is a tool that can produce complex driving maps in a few clicks thanks to its powerful procedural generation engine. Best of all, it also produces the OpenDRIVE file associated with the 3D map, so maps produced by RoadRunner can be used directly in CARLA with all the functionality available for the current towns.
VectorZero has committed to providing free RoadRunner licenses for academic and research purposes to those who request them. So check out their website to obtain a license and start building your own maps!
Figure 2. Example of RoadRunner scene
More functionality will be added very soon, and we will keep improving this API in the coming releases. If you find any issues or have suggestions, please do not hesitate to share them with the community on our GitHub or in our Discord chat. For the full list of available methods, take a look at the Python API Reference.
Big thanks to all our supporters and sponsors for making this project a reality. And happy 1 year CARLA!
Full list of changes in this release:

- `carla.Waypoint` class for querying info about the road layout
- `map.get_spawn_points()` to retrieve the recommended spawn points for vehicles
- `map.get_waypoint(location)` to query the nearest waypoint
- `map.generate_waypoints(distance)` to generate waypoints all over the map at an approximate distance
- `map.get_topology()` to get a list of tuples of waypoints defining the edges of the road graph
- `waypoint.next(distance)` to retrieve the list of waypoints at a certain distance that can be reached from a given waypoint
- `parent` attribute on actors, not None if the actor is attached to another actor
- `filter` functionality and lazy initialization of actors
- `semantic_tags` attribute on actors, containing the list of tags of all of their components (these tags match the ones retrieved by the semantic segmentation sensor)
- `world.wait_for_tick()` to block the current thread until a "tick" message is received
- `world.on_tick(callback)` to execute a callback asynchronously each time a "tick" message is received; these methods return/pass a `carla.Timestamp` object containing the frame count, the delta time of the last tick, the global simulation time, and the OS timestamp
- Methods such as `actor.get_transform()` don't need to connect with the simulator, which makes these calls quite cheap
- `carla.Vector3D` for (x, y, z) objects that are not a
- `client.get_server_version()`, which accomplishes the same purpose
- Fixed: `actor.set_transform()` broken for attached actors
- Fixed: `-carla-settings` fails to load absolute paths (by @harlowja)
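As an illustration of what can be built on top of `map.get_topology()`, its edge list (pairs of connected waypoints) folds naturally into an adjacency map, the first step of most route planners. A sketch with integers standing in for `carla.Waypoint` objects, so it runs without a simulator:

```python
from collections import defaultdict

# Hypothetical edge list shaped like map.get_topology() output: (begin, end)
# waypoint pairs; integers stand in for carla.Waypoint objects.
topology = [(1, 2), (2, 3), (2, 4)]

# Fold the edges into an adjacency map: successors reachable from each node.
graph = defaultdict(list)
for begin, end in topology:
    graph[begin].append(end)

print(sorted(graph[2]))  # → [3, 4]
```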