Recently there has been a trend toward increasing the capability of the in-vehicle interface in terms of access to information and complex controls. This has been accompanied by an increase in the complexity of the car Human Machine Interface (HMI). At the same time, studies have shown that driver distraction can contribute to accidents. This paper presents ways to reduce driver cognitive load by augmenting the interface: we predict the driver's next action or intention and use the prediction to provide UI affordances that let actions be selected more quickly. Two examples are presented: prediction of driver interaction with the car HMI based on driving history, and prediction of driver intention from driver speech. In the first example, we use signal-processing techniques to extract meaningful features from vehicle CAN and history data, and then apply machine learning to predict the driver's next action. In the second, we use automatic speech recognition (ASR) and natural language processing to extract text features from driver speech, and predict user intention with a neural network and word embeddings. The proposed methods for predicting user actions and intentions can be used to improve in-vehicle task performance.
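The first example, predicting the driver's next HMI action from history data, can be illustrated with a minimal sketch. The abstract does not specify the model, so the sketch below substitutes a simple first-order transition-count (Markov) predictor, and the action names and example drive logs are hypothetical:

```python
from collections import Counter, defaultdict

def train_next_action_model(action_histories):
    """Count first-order transitions between HMI actions.

    action_histories: list of per-drive HMI action sequences, as might be
    extracted from vehicle history data (action names are hypothetical).
    """
    transitions = defaultdict(Counter)
    for history in action_histories:
        for prev, nxt in zip(history, history[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next_action(transitions, last_action):
    """Return the most frequent follow-up action, or None if unseen."""
    counts = transitions.get(last_action)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Hypothetical usage: past drives recorded as HMI action sequences.
drives = [
    ["start", "navigation", "radio", "climate"],
    ["start", "navigation", "radio"],
    ["start", "radio", "climate"],
]
model = train_next_action_model(drives)
print(predict_next_action(model, "navigation"))  # → radio
```

A deployed system would replace the raw action symbols with the signal-processed CAN features the paper describes and a learned classifier, but the prediction interface (history in, most likely next action out) is the same.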
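The second example, mapping recognized speech to a user intention via word embeddings, can be sketched in a similarly reduced form. In place of the paper's neural network, the sketch below uses a nearest-prototype classifier over averaged word embeddings; the embedding values, vocabulary, and intent names are all toy assumptions:

```python
import math

# Toy 2-dimensional word embeddings (hypothetical; a real system would
# use embeddings learned from data).
EMB = {
    "navigate": [1.0, 0.0], "route": [0.9, 0.1], "home": [0.8, 0.2],
    "call":     [0.0, 1.0], "phone": [0.1, 0.9], "mom":  [0.2, 0.8],
}

def embed(text):
    """Average the embeddings of known words (bag-of-embeddings)."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    if not vecs:
        return [0.0, 0.0]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict_intent(text, prototypes):
    """Return the intent whose prototype phrase is closest in embedding space."""
    v = embed(text)
    return max(prototypes, key=lambda intent: cosine(v, embed(prototypes[intent])))

prototypes = {"navigation": "navigate route home", "phone_call": "call phone mom"}
print(predict_intent("navigate me home", prototypes))  # → navigation
```

The averaged-embedding text representation is the part this sketch shares with the paper's pipeline; swapping the prototype comparison for a trained neural classifier changes the decision rule, not the input features.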