Neural networks have demonstrated powerful capabilities across many research fields in recent years. By using different network structures, many new algorithms have been developed to improve accuracy. Alongside this algorithmic development, corresponding hardware architectures have also been proposed for acceleration. However, a pure algorithm may not be hardware friendly; as a result, an optimal trade-off must be found between algorithmic accuracy and architectural efficiency. To help students bridge the gap between algorithm and architecture, this paper introduces a project-based learning approach. The project, learned image compression, is composed of three phases: algorithm design, architecture mapping, and algorithm-architecture co-optimization. Through the project, students are expected to develop a neural network that achieves both a high image compression ratio and good hardware performance. Furthermore, this knowledge can be extended to other neural network applications.