Immersive sound spaces synthesized with virtual reality techniques have recently been developed to support highly realistic telecommunication. In real-world soundscapes, the sounds we hear include background, or "ambient," sounds such as the hum of a room's ventilation system. Current virtual auditory display systems, however, render point sound sources attached to specific object locations: they can reproduce sound direction, but convey no information about the surrounding sound space, so the output often sounds dry and unnatural. In this research, a rendering method for ambient sounds and its effects are investigated. An optimal rendering algorithm for ambient sounds is proposed, and its effect on the perceived quality of the sound space is examined.
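To make the distinction concrete, the following is a minimal illustrative sketch, not the algorithm proposed in this work: a directional point source is rendered with a constant-power stereo pan (a stand-in for full HRTF rendering), and a diffuse ambient bed is added as uncorrelated noise in each ear so that it has no localizable direction.

```python
import numpy as np

def constant_power_pan(mono, azimuth_deg):
    """Pan a mono signal to stereo with the constant-power law.
    azimuth_deg ranges from -45 (hard left) to +45 (hard right).
    This models the directional point source a virtual auditory
    display reproduces (a simplification of HRTF rendering)."""
    theta = (azimuth_deg + 45.0) / 90.0 * (np.pi / 2.0)
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

def add_ambient_bed(stereo, ambient_gain=0.05, rng=None):
    """Add an independent noise signal per ear. Because the left
    and right channels are uncorrelated, the ambience is perceived
    as diffuse rather than as a localizable point source."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(stereo.shape)
    return stereo + ambient_gain * noise

# Example: a 1 kHz tone 30 degrees to the right, one second at 16 kHz,
# with and without a quiet diffuse ambient bed.
fs = 16000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
dry = constant_power_pan(tone, 30.0)       # directional, "dry" output
wet = add_ambient_bed(dry)                 # with diffuse ambience
```

The function names and the gain parameter here are hypothetical; the sketch only illustrates why a display limited to panned point sources omits the diffuse component that real soundscapes contain.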