Although several driver gaze direction classifiers exist to prevent traffic accidents caused by inattentive driving, performing this classification while the driver's face is temporarily or permanently occluded remains exceptionally challenging. For example, drivers wearing masks, sunglasses, or scarves, as well as daily light variations, are non-ideal conditions that recur in everyday driving scenarios and are frequently overlooked by existing classifiers. This paper presents a single-camera gaze zone classification framework that operates robustly even under non-uniform lighting, non-frontal face poses, and temporary or permanent face occlusions. The cornerstone of the feature vector used in our model is the combination of a normalized dense-alignment face pose vector with the classification results of the pre-processed right-eye and left-eye image regions. The contribution of this paper is twofold: first, the use of normalized dense alignment for robust face, landmark, and head-pose direction detection; second, the processing of the right- and left-eye images with computer vision and deep learning techniques to refine, modify, and ultimately label the eye information. Experiments on a challenging dataset involving non-uniform lighting, non-frontal face poses, and faces with temporary or permanent occlusions show each feature's importance in building a robust gaze zone classifier under unconstrained driving situations.
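The feature-vector composition described above can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensionalities, function names, and the use of per-eye class-probability outputs are assumptions made purely for illustration.

```python
import numpy as np

def normalize(v):
    """L2-normalize the dense-alignment head-pose vector.

    The actual normalization used in the paper is an assumption here.
    """
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def build_feature_vector(head_pose, right_eye_out, left_eye_out):
    """Concatenate the normalized head-pose vector with the
    right-eye and left-eye classifier outputs, mirroring the
    three components of the feature vector named in the abstract."""
    return np.concatenate([
        normalize(head_pose),
        np.asarray(right_eye_out, dtype=float),
        np.asarray(left_eye_out, dtype=float),
    ])

# Hypothetical example: a 3-D head-pose direction and 9 gaze-zone
# probabilities per eye (both sizes are illustrative assumptions).
feat = build_feature_vector(
    [0.2, -0.1, 0.97],
    np.full(9, 1 / 9),
    np.full(9, 1 / 9),
)
print(feat.shape)  # (21,)
```

In a sketch like this, the concatenated vector would then feed the downstream gaze zone classifier; the per-eye outputs could equally be hard labels rather than probabilities, depending on the eye-classifier design.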