Cow behaviour provides valuable information about animal welfare, activity and livestock production, so behaviour monitoring is gaining importance for improving animal health, fertility and production yield. However, recognizing or classifying different behaviours with high accuracy is challenging because of the high similarity of movements among these behaviours. In this study, we propose a deep learning framework that combines a C3D (Convolutional 3D) network with ConvLSTM (Convolutional Long Short-Term Memory) to monitor and classify five common dairy cattle behaviours: feeding, exploring, grooming, walking and standing. First, 3D CNN features are extracted from video frames with the C3D network; ConvLSTM is then applied to extract further spatio-temporal features, and the resulting features are fed to a softmax layer for behaviour classification. Using 30-frame video clips, the proposed approach achieved 90.32% and 86.67% classification accuracy on the calf and cow datasets respectively, outperforming state-of-the-art methods including Inception-V3, SimpleRNN, LSTM, BiLSTM and C3D. The influence of video length on behaviour classification was also investigated: increasing the sequence length to 30 frames improved classification performance. Extensive experiments show that combining C3D and ConvLSTM noticeably improves video-based behaviour classification accuracy through spatial-temporal features, enabling automated behaviour classification for precision livestock farming.
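The pipeline described in the abstract (C3D-style 3D convolutions, followed by a ConvLSTM over the temporal axis, followed by a softmax classifier over five behaviours) can be sketched roughly as below. This is a hypothetical illustration in PyTorch, not the authors' implementation: the layer counts, channel widths and input resolution are placeholder assumptions, and the `ConvLSTMCell` here is a minimal textbook variant rather than the exact cell used in the paper.

```python
# Hypothetical sketch (NOT the authors' code) of a C3D + ConvLSTM pipeline
# for 5-class behaviour classification; all layer sizes are assumptions.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class C3DConvLSTM(nn.Module):
    """Shallow C3D-style feature extractor, then a ConvLSTM unrolled over
    the remaining time steps, then global pooling and a softmax head."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.c3d = nn.Sequential(                 # stand-in for full C3D
            nn.Conv3d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        self.clstm = ConvLSTMCell(64, 64)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        feat = self.c3d(clip)                     # (B, 64, T', H', W')
        b, ch, t, hh, ww = feat.shape
        h = feat.new_zeros(b, 64, hh, ww)
        c = feat.new_zeros(b, 64, hh, ww)
        for step in range(t):                     # unroll over time
            h, c = self.clstm(feat[:, :, step], (h, c))
        pooled = h.mean(dim=(2, 3))               # global average pooling
        return torch.softmax(self.fc(pooled), dim=1)

model = C3DConvLSTM()
clip = torch.randn(2, 3, 30, 64, 64)              # two 30-frame clips
probs = model(clip)                               # (2, 5) class probabilities
```

Feeding the 3D-convolutional feature maps into a ConvLSTM, rather than flattening them first, is the point of the combination: the recurrent state keeps its spatial layout, so motion patterns that distinguish visually similar behaviours are modelled jointly in space and time.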
ASJC Scopus subject areas
- Computer Science Applications