In the fast-paced world of autonomous vehicle (AV) development, the CARLA Autonomous Driving Simulator has quickly become a go-to resource for developers and researchers alike. Why? Because it provides easy-to-use APIs, integrates with other technologies, and is highly customizable. These features make CARLA an essential tool for anyone pushing the boundaries of AV tech. We are expanding these capabilities with the NVIDIA Omniverse Cloud APIs announced today at NVIDIA GTC, a global AI conference.
First off, CARLA is user-friendly. You don’t need to spend days trying to figure out how to make it work; it’s designed to get you up and running quickly. It’s also flexible: whatever your project, chances are CARLA can adapt to fit its needs. Want to tweak scenarios or hook it up with external software? CARLA can do it.
Where CARLA shines is in helping developers through the entire AV development cycle. Whether it’s optimizing where to place sensors on your vehicle, designing the brains of the operation (aka perception and planning algorithms), or making sure the whole system can handle real-life scenarios, CARLA is there to make the journey smoother and faster. By simulating real-world conditions in a virtual environment, developers can iterate and refine their tech without the constant need for costly and time-consuming real-world tests.
A big part of what makes CARLA so valuable is its ability to mimic the sensors that AVs rely on. With a rich set of models for cameras, LIDAR, RADAR, and IMUs right out of the box, developers have a solid foundation for building systems that can navigate the complexities of real roads. This sensor simulation is key for developing algorithms that can accurately understand and respond to dynamic driving conditions.
As the AV field grows, so does the need for more sophisticated sensor models. Enter NVIDIA Omniverse Cloud APIs, which open up new possibilities for making CARLA simulations even more realistic. These new APIs allow for the integration of cutting-edge sensors into the CARLA simulator, enhancing the accuracy of simulations and narrowing the gap between the virtual and real worlds.
The beauty of CARLA’s evolution is how it’s become a community affair. With Omniverse Cloud APIs, third-party developers can now add their sensor models to the mix, enriching the CARLA ecosystem. This collaborative approach not only speeds up innovation but also empowers everyone involved to contribute to the advancement of AV technology.
With NVIDIA Omniverse Cloud APIs, CARLA has become more than just a simulator; it’s also a powerhouse for accelerating AV development. It’s about making the development process more efficient, reducing the reliance on physical prototypes, and, ultimately, crafting the future of autonomous driving. As CARLA continues to evolve, it’s set to remain at the heart of the AV development community, driving innovation and collaboration.
Ever wanted to set up complex CARLA simulations and scenarios without writing a line of code? This is the power of Synkrotron’s OASIS Simulation Platform.
OASIS Sim facilitates the development and execution of complex scenarios through an intuitive, web-based, graphical user interface. The vast configurability of the CARLA simulator is exposed through OASIS Sim’s multiple views for Scenario Authoring, Vehicle Configuration, Diagnosis and Cloud Job Management. The simulation workload can be distributed and parallelised on the cloud, leveraging the power of distributed cloud computing such as Amazon Web Services with the latest GPU technology. In the following, we introduce you to OASIS Sim’s principal views to explain how they can help you set up and run your simulations! Please visit the Synkrotron website and request a trial.
OASIS Sim provides a Scenario Editor GUI that enables users to design and edit traffic scenarios visually and intuitively. Users can specify the environment condition, the driving task of the ego vehicle and the behaviors of other traffic participants. The definition of these elements is compatible with OpenScenario 1.0 and can be imported from external sources or exported. Background traffic can also be managed in this view to add diversity and complexity to each scenario.
In the Vehicle Configuration panel users can equip the ego vehicle with pre-defined sensor types, a vehicle dynamics model and an autonomous driving system. This view offers visualizations of sensor placement and fields of view, allowing comprehensive configuration of the sensor suite and behaviors of the ego vehicle.
OASIS Sim works on the cloud, leveraging the scalability of cloud computing and the latest GPU technology to accelerate your simulation workflow. Each simulation is managed as a cloud processing job in the task management view. For each job you can see progress status and key outcome indicators such as collisions or traffic infractions. Multiple jobs will be parallelised using cloud computing resources to accelerate the R&D workflow.
Completed simulations can be analyzed in detail through the Diagnosis view to scrutinize the performance of an autonomous driving system under test both visually and through log data. Sensor feeds such as cameras and LIDAR can be replayed directly, performance logs can be viewed and downloaded, and telemetry details such as speed, acceleration and inertial motion can be viewed graphically. The Diagnosis view gives you everything you need to troubleshoot your system through detailed metrics and visual analysis.
Both cloud and local deployments of Oasis Sim are available through containers. A comprehensive API exposes OASIS Sim’s features for seamless DevOps integration in your R&D workflow. Please visit the Synkrotron website and request a trial.
This is the first of a series of articles covering different tools within the CARLA Ecosystem. CARLA provides integration with numerous tools from the community, partners and sponsors to augment its capabilities and address a wide array of simulation use cases.
We are pleased to present to you the latest version of CARLA, version 0.9.15! This version brings SimReady content import to CARLA through NVIDIA’s Omniverse platform, 2 new maps, a procedural map generation tool and a procedural building generation tool to accelerate and enhance your CARLA content creation process.
CARLA is now integrated with NVIDIA’s Omniverse content creation platform to support SimReady content import into CARLA with just a few clicks. CARLA users can now import assets directly into CARLA through the NVIDIA Omniverse Unreal Engine plugin.
0.9.15 introduces two new maps: Town 13, a new Large Map of 10×10 km², and Town 15, a new map based on the campus of the Universitat Autònoma de Barcelona (UAB). Town 13 shares its scale and some of its features with Town 12. However, the styles of many of its details such as road surfaces, buildings and vegetation are very distinct from those of Town 12. Therefore Town 13 serves as an ideal companion to Town 12, completing a powerful train-validation pair to expose overfitting issues! Town 15 sports a campus-like road layout, with plentiful mini-roundabouts and traffic calming measures, along with minimalist modern buildings like those seen in many European universities.
This release introduces some incredible new procedural map generation tools to accelerate your environment generation workflow. The Digital Twin Tool enables a 3D map to be procedurally generated from a digital representation of real-world road networks derived from OpenStreetMap data, fully populated with roads, buildings and vegetation. The procedural building tool can be used for creating 3D models based on a selection of fascia building blocks and tweakable parameters, to create infinite variations on building styles.
A new heavy goods vehicle is now included to add diversity to CARLA’s vehicle library. The vehicle is the tractor part of an HGV in the cab-over-engine style seen primarily on European roads.
CARLA’s asset library now has a catalogue to help you find the assets you require for your simulation. Browse blueprints with pictures and choose your props and vehicles with ease!
We hope you enjoy using CARLA’s latest features!
NVIDIA’s SimReady specification supports the preparation of 3D content that is purpose-built for simulation to help streamline content creation pipelines used in simulating 3D virtual environments for machine learning purposes in robotics and autonomous driving. Through the Omniverse Unreal Engine plugin, now integrated into CARLA, users can import, in just a few clicks, SimReady content such as vehicles already configured with working lights, doors and wheels, and props ready for immediate use to decorate CARLA maps. CARLA’s Omniverse integration promises to significantly accelerate your 3D environment building pipeline and opens the door to a whole world of applications in the Omniverse ecosystem.
Town 13 shares many similarities with Town 12. It is 100 square kilometers in size and has an extensive road network, a high-rise downtown area, residential and rural areas, vegetation and water features. However, Town 13’s road network has some unique details that differ from those of Town 12. To add to this, the architectural styles of each area of the city are quite distinct from the corresponding areas of Town 12. Town 13 is also decorated with new styles of vegetation and foliage. This makes Town 13 an ideal companion to Town 12 as part of a train-validation pair. Deriving training data using one of the pair and then running validation in the other is a powerful method for exposing overfitting issues in your AD stack.
Town 15 is a standard map based on the road layout and some emblematic buildings from the Universitat Autònoma de Barcelona (UAB) campus. The road layout includes many roundabouts and roadside parking spots and also has some areas of steep elevation. The map also features numerous minimalist, modern buildings styled after those of the campus of the Universitat Autònoma de Barcelona, including the Computer Vision Center and the humanities library. This is an ideal map to train and test in environments similar to campuses or industrial estates, with slow moving traffic and traffic calming measures.
CARLA 0.9.15 introduces two new experimental features for procedural generation of new maps and buildings. These tools will help to accelerate map generation and add diversity to custom built CARLA maps.
The digital twin tool generates unique CARLA maps based on areas of road network derived from OpenStreetMap data. Users can download an area of OSM data as input for the tool and browse the map using the tool’s interface. When an area of interest is chosen, the tool will extract the roads then decorate them with realistic road surfaces, generating 3D buildings and vegetation to fill the spaces between the roads. The result is a unique CARLA map with a road network representing a digital twin of a real-world road network. The buildings are generated matching the footprint and height data extracted from OSM, so the buildings in the digital twin proportionally match the real buildings in the chosen map region. Buildings are clad with mesh pieces in a variety of styles drawn from the CARLA asset library to create visual diversity.
The procedural building tool gives CARLA users the capability to create new buildings based on a library of building block mesh pieces and parameters to control the dimensions and features of the building.
Various decorations can be added to the buildings such as window boxes, lintels, windowsills, guttering, sun shades, blinds and antennas to create subtle or large variations.
CARLA’s array of vehicles for simulation has grown with the addition of a Cab-Over-Engine style heavy goods vehicle tractor, of the style commonly used throughout Europe for commercial transportation.
CARLA’s principal assets - maps, vehicles, pedestrians and props now have a catalogue to help find the item that best suits your needs. Browse visually through maps, vehicles or props to find the assets you want to use in your simulation and copy the Blueprint ID right out of the catalogue. Check the CARLA catalogue out here. The catalogue also includes useful navigator tools for Towns 12 and 13, which can help you make your way around these two Large Maps with ease.
Other fixes and improvements:
- Fixed a bug causing the FPixelReader::SavePixelsToDisk(PixelData, FilePath) function to crash due to the pixel array not being set correctly.
- Fixed issues affecting several carla.TrafficManager Python API functions.

We at the CARLA organization are thrilled and inspired by the enthusiastic reception of the CARLA Autonomous Driving Challenge 2023! This is an exciting opportunity for innovators to test their autonomous driving systems in new challenging situations.
Here’s what you need to know:
New Leaderboard 2.0 Platform: Taking the challenge to a new level, the competition now includes new regions, new routes, and a more extensive set of traffic scenarios. Get the details on our official website: https://leaderboard.carla.org/
Submit to Leaderboard 2.0 Tracks: We have developed new tools to ease the data acquisition process for the 2.0 routes. Time to step up the game and submit your solutions to these new tracks! More information on how to get started is available here: https://leaderboard.carla.org/get_started/.
Important Update: Leaderboard 1.0 will NOT accept new submissions during the 2023 challenge.
Event Announcement: The results of the CARLA Autonomous Driving Challenge 2023 will be revealed in an independent online event in early December, hosted by the Embodied AI Foundation and CARLA’s sponsors. As always, we will invite teams behind the top submissions to participate.
Please note that, unlike previous editions, the results won’t be presented at a NeurIPS workshop. For a summary of these exciting changes, don’t miss our new instruction video.
This year’s challenge promises to be bigger and bolder than ever! Get involved, innovate, and help shape the future of autonomous driving. Your adventure begins here!
The CARLA organisation is delighted to announce today that Synkrotron has joined the CARLA consortium as a valued sponsor. Synkrotron’s CTO, Dr. Yuxi Pan, will join the CARLA consortium board to represent the company, alongside existing representatives from Intel, Toyota Research Institute, Futurewei and NVIDIA.
Synkrotron (formerly GuardStrike) offers powerful solutions for accelerating autonomous driving R&D. Their software products and solutions unify multiple aspects of the autonomous driving R&D pipeline, streamlining the collection, exchange and ingestion of data from real-world traffic scenarios, 3D assets, environments and maps into their world-class simulation platform, built on top of the CARLA simulator.
Synkrotron’s Oasis simulation platform enables the rapid creation of traffic scenarios through a user-friendly graphical scenario editor and an accompanying automated scenario generation toolbox. These functionalities help provide coverage for safety-critical scenarios with a minimum of work. To improve the fidelity of the simulation, the platform adds physically realistic and AI-based sensor models as well as data-driven traffic simulations. The entire platform can be scaled with concurrency through the cloud infrastructure.
Synkrotron deliver their solutions to OEMs and academic research laboratories worldwide, giving them a deep understanding of the industry’s simulation requirements. Their industry knowledge will serve as a considerable addition to the CARLA consortium, expertly guiding the CARLA roadmap as the industry evolves.
Dr. James Yang, founder and CEO of Synkrotron, states: “We believe a partnership between CARLA and Synkrotron will not only benefit CARLA’s user community but also the industry as a whole, serving customers who need technical support and customized simulation solutions. Besides adding new features that are open-sourced, our team is committed to provide valuable services and advanced features to augment CARLA’s functionality.”
Synkrotron have been a valued contributor to the CARLA ecosystem for several years, sponsoring the CARLA Leaderboard and making a significant code contribution to both the Leaderboard and CARLA repositories. Their addition to the consortium will be an ongoing benefit to the CARLA ecosystem.
We are pleased to welcome Synkrotron into the CARLA consortium and look forward to the future of the partnership!
CARLA is being widely used for the virtual testing of autonomous road vehicles with realistic environment simulation. But have you ever tried to use it for off-road applications such as agriculture, construction, or logistics?
AVL engineers have found a solution to enhance CARLA for simulation in unpaved, rough terrain. This has been achieved by coupling CARLA with the advanced vehicle dynamics software AVL VSM™. With the AVL solution, the vehicle dynamics precisely interact with the surface based on a synchronization of all wheel contact positions. In combination with AVL’s soft-soil tire model, users can expect realistic driving behavior.
The two simulation tools are coupled using AVL’s open co-simulation platform Model.CONNECT™. With plug & play interfaces, it is easy to connect the signal I/O ports of the tools with each other in a graphical user interface. The powerful co-simulation engine handles the correct synchronization of all tools and can even handle tools running at different frequencies.
In the concrete example of the autonomous tractor, the co-simulation setup was extended with a sensor fusion algorithm, which outputs a unified list of detected objects, and with the automated driving control algorithm. The control loop is closed by updating the new vehicle position in CARLA and generating new sensor data.
AVL agreed to contribute their CARLA code changes and technical documentation to the CARLA project. Furthermore, AVL tools are available free of charge via the University Partnership Program. Stay tuned and contact us if you are interested at info@avl.com.
The 0.9.14 release of CARLA has landed and we think you’ll be just as excited as we are about it!
At CARLA, we are scaling things up! The latest version of CARLA brings a brand new Large Map, with an unprecedented scale and level of detail. Town 12 is 10×10 km² and boasts a diverse range of environments from urban high-rise to rural corn fields. You’ll be amazed by the size and detail!
Continuing with the theme of scaling up, 0.9.14 now supports multiple GPUs for setting up high-performance CARLA workstations.
CARLA 0.9.14 brings further diversity and realism to your traffic simulations with a brand new public transit vehicle - the Mitsubishi Fuso Rosa bus.
This version of CARLA includes new semantic classes to further differentiate vehicle types. Buses, trucks, bicycles, motorcycles and riders now have their own semantic IDs and colors matching the Cityscapes color palette.
N-wheeled vehicles are now supported by CARLA’s engine, enabling users to import models of heavy goods and industrial vehicles with 3 or more axles.
And last but not least, version 0.9.14 comes with numerous fixes and improvements. CARLA now has an Ackermann control built into the API, vehicle objects now have additional attributes to help filter and organise them and the Traffic Manager has new functions to offset the vehicle from the lane center.
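As a sketch of how the new attributes could drive filtering, the snippet below uses hypothetical dictionary stand-ins for carla.ActorBlueprint objects; a live client would obtain blueprints from the world's blueprint library instead, and the attribute names follow this release (base_type, special_type):

```python
# Hypothetical stand-ins for carla.ActorBlueprint objects; a live client
# would query world.get_blueprint_library() instead.
blueprints = [
    {"id": "vehicle.ford.crown", "base_type": "car", "special_type": "taxi"},
    {"id": "vehicle.volkswagen.t2_2021", "base_type": "van", "special_type": ""},
    {"id": "vehicle.kawasaki.ninja", "base_type": "motorcycle", "special_type": ""},
]

def filter_by_attribute(bps, name, value):
    """Return the blueprints whose attribute `name` equals `value`."""
    return [bp for bp in bps if bp.get(name) == value]

# Pick out taxis and motorcycles via the new attributes
taxis = filter_by_attribute(blueprints, "special_type", "taxi")
motorcycles = filter_by_attribute(blueprints, "base_type", "motorcycle")
print([bp["id"] for bp in taxis])
```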
We hope you enjoy reading about CARLA’s latest features!
Town 12 is the newest addition to the CARLA asset library. Leveraging the Large Map functionality introduced in CARLA version 0.9.12, Town 12 boasts a diverse range of highly detailed environments including the following regions:
High-rise central business district:
Town 12’s central business district is a large area of high rise skyscrapers arranged into blocks on a consistent grid of roads, resembling downtown areas in many large American and European cities.
High density residential:
The high density residential areas of Town 12 have many 2-10 storey apartment buildings with commercial properties like cafes and retail stores at street level.
Low density residential:
The low density residential regions of Town 12 reflect the classic suburbs of many American cities, with one and two story homes surrounded by fenced gardens and garages.
Highways and intersections:
Town 12 has an extensive highway system, including 3-4 lane highways interspersed with impressive roundabout junctions.
Rural and farmland:
Town 12 also has rural regions with characteristic farmland buildings like wooden barns and farmhouses, windmills, grain silos, corn fields, hay bales and rural fencing.
The CARLA simulator can now be distributed across multiple GPU devices. Multiple synchronized instances of the simulator can be run on different GPUs, with the sensor workload distributed evenly over the graphics cards. High performance CARLA workstations can be built using multi-GPU server hardware.
CARLA’s semantic classes are now fully compatible with the Cityscapes ontology. We have included 5 new classes for extra ground truth fidelity, assisting in the classification of different types of vehicle. The semantic class list now includes separate classes for cars, trucks and buses, along with new classes for motorcycles, bicycles and their riders.
The semantic class list has 5 new members and associated colors.
The CARLA garage presents a brand new public transit vehicle. The Mitsubishi Fuso Rosa is a widely used 20-25 seat bus, used by both private travel companies and public transport authorities around the world with tens of thousands produced every year.
CARLA now supports vehicles with more than four wheels. Users can now develop models of heavy goods and industrial vehicles with 3 or more axles and import them into CARLA.
CARLA camera sensors can now simulate the distorting effects of rain and dust contamination of the lens, adding an extra level of realism to your training data and presenting challenges for testing your AD stacks with imperfect data.
As with every CARLA release, we continue our efforts to improve the workflow and fix bugs. The following are some key improvements and fixes:
New vehicle attributes:

Vehicle blueprints now have new attributes to help organize and filter them better:

- base_type: can be used as a vehicle classification. The possible values are car, truck, van, motorcycle and bicycle.
- special_type: provides more information about the vehicle. It is currently restricted to electric, emergency and taxi; not all vehicles have this attribute filled.
- has_dynamic_doors: can either be true or false, depending on whether or not the vehicle has doors that can be opened using the API.
- has_lights: works in the same way as has_dynamic_doors, but differentiates between vehicles with lights and those without.

Native Ackermann controller:
The CARLA API now has methods for applying Ackermann controls to a vehicle:

- apply_ackermann_control: to apply an Ackermann control command to a vehicle
- get_ackermann_controller_settings: to get the last Ackermann controller settings applied
- apply_ackermann_controller_settings: to apply new Ackermann controller settings

New Traffic Manager methods:
The Traffic Manager has new methods to offset the vehicle from the lane center:
- vehicle_lane_offset(actor, offset)
- global_lane_offset(offset)
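For context on the native Ackermann controller above: an Ackermann control command specifies a steering angle for a virtual centre wheel rather than raw per-wheel angles. The self-contained sketch below is illustrative geometry only, not part of the CARLA API, and shows how ideal Ackermann geometry maps that single angle to distinct inner and outer front-wheel angles:

```python
import math

def ackermann_wheel_angles(steer_angle_rad, wheelbase_m, track_m):
    """Map a single steering angle for a virtual centre wheel to the
    (inner, outer) front-wheel angles under ideal Ackermann geometry."""
    if steer_angle_rad == 0.0:
        return 0.0, 0.0
    # Distance from the rear axle to the centre of the turn
    radius = wheelbase_m / math.tan(abs(steer_angle_rad))
    inner = math.atan(wheelbase_m / (radius - track_m / 2.0))
    outer = math.atan(wheelbase_m / (radius + track_m / 2.0))
    sign = 1.0 if steer_angle_rad > 0 else -1.0
    return sign * inner, sign * outer

# A 20-degree command on a typical car: the inner wheel turns more sharply
inner, outer = ackermann_wheel_angles(math.radians(20), wheelbase_m=2.7, track_m=1.6)
print(round(math.degrees(inner), 1), round(math.degrees(outer), 1))
```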
Other fixes and improvements:
- Fixed a bug in Vehicle.get_traffic_light_state() and Vehicle.is_at_traffic_light() causing vehicles to temporarily lose the information of a traffic light if they moved away from it before it turned green.
- Fixed the Vehicle.get_traffic_light_state() function not notifying about the green to yellow and yellow to red light state changes.
- Fixed a bug causing the Vehicle.is_at_traffic_light() function to return false if the traffic light was green.
- Fixed a bug causing the world.ground_projection() function to return an incorrect location in Large Maps.
- Added Vehicle.get_failure_state(). Only the Rollover failure state is currently supported.
- Fixed a bug in Map.get_topology() causing lanes with no successors to not be part of it.
- Added set_desired_speed, to set a vehicle’s speed.
- Added NormalsSensor, a new sensor with normals information.
- Added set_day_night_cycle at the LightManager, to (de)activate the automatic switch of the lights when the simulation changes from day to night mode, and vice versa.
- Added listen_to_gbuffer, to set a callback for a specific GBuffer texture.

The CARLA team is delighted to announce that our Leaderboard partners, Guardstrike, have released the CARLA-Apollo bridge.
Apollo is an open-source L4 autonomous driving software stack used by many OEMs, solution vendors, developers and researchers. The CARLA-Apollo bridge connects the two popular open-source software packages, enabling Apollo software stacks to drive the CARLA simulator and receive, assimilate, interpret and visualize data through the extensively featured Apollo interface.
This promises great new potential for both user communities, connecting a high performance, flexible platform for accelerating development and testing of AD software stacks with the industry’s most widely used open-source simulator.
The CARLA Team
NVIDIA’s participation in the consortium will provide many benefits to the CARLA community. Connecting the CARLA simulator to NVIDIA Omniverse, a platform for integrating and building custom 3D pipelines, will provide rich new sources of content such as vehicles, pedestrians, and props for simulation in CARLA.
Creating and accessing high-quality, diverse content for realistic, 3D virtual worlds used in autonomous driving simulation can be an arduous task. By joining forces and supporting a common specification for simulation-ready assets, artists and content providers can build assets that are ready to use without the usual days or weeks of work needed.
Over the past six years, the CARLA team has made a huge effort to provide as many open and diverse 3D assets as possible, releasing 11 maps and thousands of 3D assets, including materials and texture maps. However, autonomous vehicles need training data to adapt to a highly disparate range of real-world environments. This motivates continuous growth of the scale and diversity of our content. The connection of CARLA to NVIDIA Omniverse will enable users to tap into a larger ecosystem of content and content-creation software, augmenting CARLA’s existing library.
NVIDIA Omniverse is designed to facilitate the interchange of assets between content-creation tools. Omniverse is built on the open-source Universal Scene Description (USD) format. Incorporating support for USD will streamline the ingestion of newly created assets into CARLA, leveraging workflows that take advantage of the multitude of connected Omniverse applications, which includes industry-standard software such as Blender, Autodesk Maya and 3ds Max.
Sanja Fidler, NVIDIA’s Vice President of AI Research, who is also an associate professor in the Department of Computer Science at the University of Toronto, is a strong advocate of CARLA and has cited the consortium’s work in several published papers and articles on AI and reinforcement learning. In her role on the CARLA board, she will share her expertise in AI and research as the consortium plans future developments.
We are thrilled to collaborate with NVIDIA to bring a large collection of high-quality and diverse content for the benefit of the CARLA community. And this is just the beginning. Stay tuned for more updates!
The CARLA Team
The 0.9.13 release of CARLA is finally here, so read on to see how the new improvements will revolutionize your workflow.
CARLA now features ground truth instance segmentation! A new camera sensor can be used to output images with the semantic tag and a unique ID for each world object. Now the ground truth ID is available to differentiate objects of the same type in the camera field of view. This functionality can be used for many different purposes such as training and evaluating networks to differentiate and track overlapping objects.
Runtime texture streaming is now possible through the CARLA API. This new functionality enables the user to change the textures of every object in the scene during runtime without requiring the Unreal Editor. Produce high quality training data to improve generalization, avoid overfitting pitfalls and challenge neural networks with adversarial attacks through continuously updating textures.
CARLA’s API now provides new functions to retrieve ground truth pedestrian 3D bone positions and hence evaluate the pedestrian’s 3D pose, facilitating the validation and training of pose estimation models and also adding custom gestures to standard animations through the API.
CARLA 0.9.13 presents new assets to enrich your simulations with a selection of additional pedestrians and vehicles. 4 new pedestrians are introduced including 2 adults (each with 3 variants) and 2 children to add extra variety to the humans occupying the pavements and roads in your simulations. 2 new vehicles have also been added to the CARLA garage, the Volkswagen T2 2021 and a new Ford Crown Taxi. Generation 2 (GEN-2) cars now include articulated doors that can interact with the environment through collisions. Articulating doors are especially useful for training and testing autonomous vehicles in car parks or on tight city streets.
This release comes with major improvements to the Traffic Manager, with new logic to promote more realistic vehicle behavior at intersections and at high speeds. The API now offers new functionality to guide vehicles with a user defined route or path and vehicle lights now react to weather conditions and junctions. Vehicle physics has been enhanced with more accurate models of acceleration, gear and braking behavior.
CARLA now has a brand new instance segmentation sensor that outputs images containing a different ID for every object. Instance semantic IDs are now available embedded in the G and B channels of the RGB output of the sensor data, alongside the standard semantic IDs in the R channel. The new functionality provides access to the ground truth identity of objects traversing the sensor field of view, enabling training, improvement, validation and evaluation of neural networks. This new functionality brings a whole new world of possibilities for training machine learning algorithms to differentiate overlapping objects in scenes.
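As a sketch of how a single pixel of that packed output might be decoded (the G-low/B-high byte order and the example tag value are assumptions; check the sensor documentation for the exact packing in your CARLA version):

```python
def decode_instance_pixel(r, g, b):
    """Split one pixel of the instance segmentation sensor's RGB output.

    The semantic tag sits in the R channel; the per-object instance ID is
    packed across G and B (assumed here: G is the low byte, B the high byte).
    """
    semantic_tag = r
    instance_id = g | (b << 8)
    return semantic_tag, instance_id

# Two pixels sharing a semantic tag but carrying different instance IDs
# belong to two distinct objects of the same class.
print(decode_instance_pixel(14, 57, 1))
```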
The API now exposes functionality to update textures during runtime, allowing texture modification without relying on the UE4 editor. Users can now update and modify textures programmatically through the API, allowing texture updating to occur, for example, during the execution of a neural network training script. This can be useful, for example, when training and validating neural networks to avoid overfitting pitfalls. Continuously updating textures can be used to challenge neural networks with adversarial attacks.
# Connect to the simulator (assumes a CARLA server is running locally)
import random
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Get names of all available objects
object_names = world.get_names_of_all_objects()
for name in object_names:
    print(name)

# Choose an object to modify
# For example target_object could be 'SM_Cartel_Add_5'
target_object = random.choice(object_names)
print('Altering texture for object: ' + target_object)

# Modify its texture; image is assumed to be a height x width array of
# RGBA pixels loaded beforehand (e.g. from an image file)
height = len(image)
width = len(image[0])
texture = carla.TextureColor(width, height)
for x in range(width):
    for y in range(height):
        r, g, b, a = (int(c) for c in image[y][x])
        # Texture rows run bottom-up, so flip the y coordinate
        texture.set(x, height - y - 1, carla.Color(r, g, b, a))
world.apply_color_texture_to_object(target_object, carla.MaterialParameter.Diffuse, texture, 0)
The CARLA API now provides functionality to retrieve the ground truth 3D bone positions of pedestrians in the simulation. This opens up a wealth of possibilities for training and evaluation of human pose estimation models. A CARLA vehicle can now record the bone positions of nearby pedestrians to complement its sensor data with access to the ground truth 3D pose for comparison with model-estimated poses. This new functionality also allows easy augmentation of pedestrian behavior with custom gestures blended into the default animations like waving or pointing.
The following animation shows a custom pose applied through the CARLA API: the pedestrian's bones are retrieved, then the result is blended between the neutral pose and a custom walking animation:
The following code spawns two pedestrians from the blueprint library and then retrieves the 3D skeletons and alters the arms’ pose:
```python
# create several pedestrians at random navigable locations
blueprints = world.get_blueprint_library().filter("walker.pedestrian.*")
pedestrians = []
pedestrians.append(world.spawn_actor(random.choice(blueprints),
                   carla.Transform(world.get_random_location_from_navigation())))
pedestrians.append(world.spawn_actor(random.choice(blueprints),
                   carla.Transform(world.get_random_location_from_navigation())))
...

# get the 3D bones of every pedestrian
for ped in pedestrians:
    bones = ped.get_bones()

    # modify some bones: rotate both forearms by 90 degrees
    new_pose = []
    for bone in bones.bone_transforms:
        if bone.name == "crl_foreArm__L":
            bone.relative.rotation.pitch -= 90
            new_pose.append((bone.name, bone.relative))
        elif bone.name == "crl_foreArm__R":
            bone.relative.rotation.pitch -= 90
            new_pose.append((bone.name, bone.relative))

    # set the new pose
    control = carla.WalkerBoneControlIn()
    control.bone_transforms = new_pose
    ped.set_bones(control)

    # blend halfway between the default animation and the custom pose
    ped.blend_pose(0.5)
```
Four new pedestrians have been added to the CARLA assets: two adults (each with three variants) and two children.
The CARLA garage has built two exciting new vehicles: a high-fidelity version of the Volkswagen T2 2021 and the Ford Crown taxi. Add some charm to your city traffic with these beautiful new vehicles.
All Generation 2 (GEN-2) vehicles now have articulated doors that can be opened and closed through functions in the API. This functionality is ideal for simulating scenarios in busy city streets and car parks where doors may open unpredictably while driving past or might be blocking access to parking spaces or passageways.
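Opening and closing doors is a single call on the vehicle actor. A minimal sketch follows; it assumes a simulator running on localhost and that the randomly chosen blueprint is a GEN-2 vehicle with articulated doors:

```python
import random

import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Spawn a vehicle at a random spawn point
bp = random.choice(world.get_blueprint_library().filter('vehicle.*'))
spawn_point = random.choice(world.get_map().get_spawn_points())
vehicle = world.spawn_actor(bp, spawn_point)

# carla.VehicleDoor enumerates FL, FR, RL, RR and All
vehicle.open_door(carla.VehicleDoor.FL)    # open the front-left door
world.wait_for_tick()
vehicle.close_door(carla.VehicleDoor.All)  # close every door
```

Because the doors are part of the physics simulation, an open door is visible to the sensors of other vehicles driving past, which is what makes the parking and passing scenarios described above possible.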
The Traffic Manager has undergone a major update with enhanced logic for handling vehicle behavior. Traffic now behaves more realistically at intersections, with vehicles driving fluidly at high speeds, and turning is smoother under highway conditions. Collision detection and braking are enhanced to promote more realistic behavior in fast-moving traffic. Altogether, the Traffic Manager improvements in this CARLA update yield far more realistic NPC traffic.
The Traffic Manager now also includes some great new functionality to augment user control over the behavior of vehicles. The API exposes new Traffic Manager functions that can be used to guide vehicles with a user-defined custom path.
New functions:

- get_next_action: gets the next action that a vehicle will take
- get_all_actions: enables the user to query all possible actions a vehicle has available
- set_path: allows a custom route to be defined using coordinates given as CARLA Locations
- set_route: allows a custom route to be defined using route commands like left, right or straight

The set_path function allows a custom route to be defined using coordinates:
```python
path = [carla.Location(x=-506.696198, y=179.384308, z=0.038194),
        carla.Location(x=-504.745972, y=232.868927, z=0.039417)]
traffic_manager.set_path(vehicle, path)
```
The set_route function allows a custom route to be defined using route commands:
```python
route = ["Right", "Straight", "Right", "Right", "Left", "Right"]
traffic_manager.set_route(vehicle, route)
```
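The two query functions pair naturally with the custom routes. A short sketch, assuming vehicle is already registered with the Traffic Manager via set_autopilot and that each action is returned as a road option paired with a waypoint:

```python
# Inspect the Traffic Manager's immediate plan for the vehicle
action, waypoint = traffic_manager.get_next_action(vehicle)
print('Next action:', action, 'at', waypoint.transform.location)

# List everything the Traffic Manager has planned ahead
for action, waypoint in traffic_manager.get_all_actions(vehicle):
    print('Planned:', action)
```

Polling get_next_action after a set_route call is a convenient way to verify that the custom route was accepted.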
New functionality grants the Traffic Manager control over vehicle lights. Headlights, fog lights and blinkers are now controlled according to the vehicle's route and the weather conditions: blinkers activate ahead of the next planned turn in the route, brake lights activate when the brakes are applied, and fog lights and main beams respond to the weather and light conditions. At night, vehicles' main beams are switched on; in foggy conditions, rear fog lights are activated.
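Automatic light management is enabled on a per-vehicle basis. A minimal sketch, assuming update_vehicle_lights is the Traffic Manager toggle for this feature and vehicle is an already-spawned actor:

```python
# Hand the vehicle to the Traffic Manager and let it manage the lights
vehicle.set_autopilot(True)
traffic_manager.update_vehicle_lights(vehicle, True)
```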
Vehicle physics have been improved with better-modelled acceleration, braking and gear performance. Vehicle behavior has been carefully remodelled to better capture the real characteristics of the CARLA garage vehicles, taking great care to reproduce published acceleration and braking figures as accurately as possible. Similarly, gear-change behavior has been altered to better reflect real-world driving.
Here we want to acknowledge all the contributors who committed work to CARLA 0.9.13. Thank you all for your hard work!