We propose a decentralized system that determines where ride-sharing vehicle agents should wait for passengers, using multi-agent deep reinforcement learning. Although numerous drivers have begun participating in ride-sharing services as demand for these services has increased, much of their time is spent idle. The result is not only inefficiency but also wasted energy and increased traffic congestion in metropolitan areas, as well as a shortage of ride-sharing vehicles in the surrounding areas. We therefore developed the distributed service area adaptation method for ride sharing (dSAAMS), which determines the areas in which each agent should wait for passengers through deep reinforcement learning, based on the networks of individual agents and on demand prediction data provided by an external system. We evaluated the performance and characteristics of the proposed method in a simulated environment with varied demand occurrence patterns and with actual data obtained from the Manhattan area. We compared the performance of our method with that of conventional methods and with a centralized version of dSAAMS. Our experiments indicate that with dSAAMS, agents individually wait and move more effectively around their service territories, provide better-quality service, and perform better in dynamically changing environments than with the comparison methods.
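To make the decentralized setup concrete, the following is a minimal sketch (not the paper's actual implementation) of the idea that each agent independently learns which area to wait in from externally supplied demand predictions. It substitutes simple tabular learning for the per-agent deep networks, and all names (`WaitingAgent`, `ZONES`, `DEMAND`) and the deterministic expected-pickup reward are illustrative assumptions.

```python
import random

random.seed(0)

ZONES = 4
# Assumed demand forecast per zone, standing in for the external
# demand prediction system described in the abstract.
DEMAND = [0.1, 0.7, 0.15, 0.05]

class WaitingAgent:
    """One vehicle agent learning, independently of the others,
    the value of waiting in each zone (tabular stand-in for the
    per-agent network)."""
    def __init__(self, alpha=0.5, eps=0.1):
        self.q = [0.0] * ZONES   # estimated value of waiting in each zone
        self.alpha = alpha       # learning rate
        self.eps = eps           # exploration probability

    def choose(self):
        # Epsilon-greedy choice of waiting zone.
        if random.random() < self.eps:
            return random.randrange(ZONES)
        return max(range(ZONES), key=lambda z: self.q[z])

    def update(self, zone, reward):
        # Move the estimate toward the observed reward.
        self.q[zone] += self.alpha * (reward - self.q[zone])

agents = [WaitingAgent() for _ in range(3)]
for step in range(2000):
    for a in agents:
        z = a.choose()
        # Simplified reward: the expected pickup probability of the
        # chosen zone (a deterministic stand-in for actual pickups).
        a.update(z, DEMAND[z])

# Each agent's preferred waiting zone after training.
best = [max(range(ZONES), key=lambda z: a.q[z]) for a in agents]
print(best)
```

Under these toy assumptions every agent converges on the highest-demand zone; in the full method, the learned policies additionally depend on each agent's own observations, so agents can spread across complementary service areas rather than all selecting the same one.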