Detailed description of the topic:
One of the main drivers of research and development in the automotive industry is automated driving. In the short term, manufacturers are focusing on the commercialization of advanced driver assistance systems; in the long run, the aim is to make self-driving cars with SAE Level 5 capabilities available to a broad segment of society. A key element of an automated vehicle is its motion control system, which, together with appropriate actuator and sensing systems, moves the vehicle in a predictable and accurate way. This task can be challenging even for a professional driver when one of the vehicle's tires reaches its saturation limit; consider, for example, special motorsport driving techniques such as drifting, trail braking, or the pendulum turn. Moreover, rapidly changing conditions, such as patches of ice on the road, aquaplaning, or tires heating up, can be hard to handle.

Classical control systems are usually based on models containing information about the controlled plant collected in advance, and they can be augmented with different types of observers. However, several aspects are difficult to capture by modeling, for example tire characteristics with uneven temperature across the profile, or the expected behavior of the vehicle under load changes, actuator dynamics, or battery state-of-charge variations. For such reasons, using learning algorithms, either standalone or combined with classical control techniques, could be advantageous.

The goal of this Ph.D. topic is to develop a reinforcement learning algorithm that can precisely control the vehicle's motion even at the handling limits and under rapidly changing conditions, and that can provide maximal flexibility of vehicle motion for avoiding accidents.
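As a purely illustrative sketch of what "combining a learning component with a classical controller" can mean, the following Python example adds a learned residual steering correction on top of a simple proportional baseline controller for a kinematic bicycle model tracking a straight reference path. The vehicle model, the baseline control law, the cost function, the parameter values, and the derivative-free parameter search are all assumptions chosen for brevity; they stand in for the actual plant, controller, and reinforcement learning method to be developed in the topic.

```python
# Minimal sketch: classical baseline controller + learned residual policy.
# All models and parameters below are illustrative assumptions.
import numpy as np

WHEELBASE = 2.7   # [m], assumed wheelbase
DT = 0.05         # [s], simulation step
SPEED = 15.0      # [m/s], constant longitudinal speed for simplicity


def step(state, steer):
    """Kinematic bicycle model; state = (x, y, yaw)."""
    x, y, yaw = state
    x += SPEED * np.cos(yaw) * DT
    y += SPEED * np.sin(yaw) * DT
    yaw += SPEED / WHEELBASE * np.tan(steer) * DT
    return np.array([x, y, yaw])


def baseline_steer(state):
    """Classical part: proportional law driving lateral and heading error
    to zero while tracking the reference path y = 0."""
    _, y, yaw = state
    return -0.5 * y - 1.0 * yaw


def rollout(params, horizon=200):
    """Run one episode; the learned residual is added to the baseline."""
    state = np.array([0.0, 2.0, 0.3])  # start offset from the reference path
    total_cost = 0.0
    for _ in range(horizon):
        features = np.array([state[1], state[2]])          # lateral, heading error
        steer = baseline_steer(state) + params @ features  # residual correction
        steer = np.clip(steer, -0.5, 0.5)                   # actuator limit
        state = step(state, steer)
        total_cost += state[1] ** 2 + 0.1 * steer ** 2      # tracking + effort cost
    return total_cost


# Crude derivative-free search as a stand-in for a reinforcement learning
# update: perturb the residual-policy parameters and keep improvements.
rng = np.random.default_rng(0)
params = np.zeros(2)
best = rollout(params)
for _ in range(100):
    candidate = params + 0.05 * rng.standard_normal(2)
    cost = rollout(candidate)
    if cost < best:
        params, best = candidate, cost

print("learned residual gains:", params, "episode cost:", best)
```

In an actual implementation the residual policy would be trained with a proper reinforcement learning algorithm on a far richer vehicle model (including tire saturation and changing road conditions); the point of the sketch is only the structure, i.e., a model-based baseline augmented by a learned correction.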