The Phasebook: Building Complex Masks via Discrete Representations for Source Separation

Jonathan Le Roux, Gordon Wichern, Shinji Watanabe, Andy Sarroff, John R. Hershey

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep learning based speech enhancement and source separation systems have recently reached unprecedented levels of quality, to the point that performance is reaching a new ceiling. Most systems rely on estimating the magnitude of a target source, either directly or by computing a real-valued mask to be applied to a time-frequency representation of the mixture signal. A limiting factor in such approaches is a lack of phase estimation: the phase of the mixture is most often used when reconstructing the estimated time-domain signal. We propose to estimate phase using "phasebook", a new type of layer based on a discrete representation of the phase difference between the mixture and the target. We also introduce "combook", a similar type of layer that directly estimates a complex mask. We present various training and inference schemes involving these representations, and explain in particular how to include them in an end-to-end learning framework. We also present an oracle study to assess upper bounds on performance for various types of masks using discrete phase representations. We evaluate the proposed methods on the wsj0-2mix dataset, a well-studied corpus for single-channel speaker-independent speaker separation, matching the performance of state-of-the-art mask-based approaches without requiring additional phase reconstruction steps.
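The core idea of the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name `phasebook_mask`, the uniform codebook initialization, and the random inputs are all illustrative assumptions. In the paper, the per-bin probabilities come from a network layer and the codebook entries can themselves be learned end-to-end.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the codebook dimension.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def phasebook_mask(logits, magnitude_mask, phasebook):
    """Combine a real-valued magnitude mask with a discrete phase estimate.

    logits:         (..., K) network outputs over K phase codebook entries
    magnitude_mask: (...,)   real-valued magnitude mask
    phasebook:      (K,)     candidate phase values in radians
    Returns a complex mask per time-frequency bin.
    """
    probs = softmax(logits)  # per-bin distribution over phase candidates
    # Expected phase factor: convex combination of unit-modulus vectors.
    phase_factor = (probs * np.exp(1j * phasebook)).sum(axis=-1)
    # A "combook" layer would instead use learned complex codebook
    # entries c_k directly: (probs * combook).sum(axis=-1).
    return magnitude_mask * phase_factor

# Illustrative example: K = 8 uniformly spaced phase candidates.
K = 8
phasebook = np.linspace(-np.pi, np.pi, K, endpoint=False)
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 3, K))   # e.g. 2 frames x 3 frequency bins
mag = rng.uniform(0.0, 1.0, size=(2, 3))
mask = phasebook_mask(logits, mag, phasebook)
print(mask.shape, mask.dtype)         # (2, 3) complex128
```

Since the expected phase factor is a convex combination of unit vectors, its modulus never exceeds 1, so the complex mask's magnitude is bounded by the magnitude mask.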

Original language: English
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 66-70
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs: 10.1109/ICASSP.2019.8682587
Publication status: Published - 2019 May 1
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 2019 May 12 - 2019 May 17

Publication series

NameICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume2019-May
ISSN (Print)1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country: United Kingdom
City: Brighton
Period: 19/5/12 - 19/5/17

Keywords

  • deep learning
  • discrete representation
  • mask inference
  • phase estimation
  • source separation

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

Roux, J. L., Wichern, G., Watanabe, S., Sarroff, A., & Hershey, J. R. (2019). The Phasebook: Building Complex Masks via Discrete Representations for Source Separation. In 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings (pp. 66-70). [8682587] (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2019-May). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2019.8682587

@inproceedings{9f07f29e4a994ec6bd059911b1062392,
title = "The Phasebook: Building Complex Masks via Discrete Representations for Source Separation",
abstract = "Deep learning based speech enhancement and source separation systems have recently reached unprecedented levels of quality, to the point that performance is reaching a new ceiling. Most systems rely on estimating the magnitude of a target source, either directly or by computing a real-valued mask to be applied to a time-frequency representation of the mixture signal. A limiting factor in such approaches is a lack of phase estimation: the phase of the mixture is most often used when reconstructing the estimated time-domain signal. We propose to estimate phase using {"}phasebook{"}, a new type of layer based on a discrete representation of the phase difference between the mixture and the target. We also introduce {"}combook{"}, a similar type of layer that directly estimates a complex mask. We present various training and inference schemes involving these representations, and explain in particular how to include them in an end-to-end learning framework. We also present an oracle study to assess upper bounds on performance for various types of masks using discrete phase representations. We evaluate the proposed methods on the wsj0-2mix dataset, a well-studied corpus for single-channel speaker-independent speaker separation, matching the performance of state-of-the-art mask-based approaches without requiring additional phase reconstruction steps.",
keywords = "deep learning, discrete representation, mask inference, phase estimation, source separation",
author = "Roux, {Jonathan Le} and Gordon Wichern and Shinji Watanabe and Andy Sarroff and Hershey, {John R.}",
year = "2019",
month = "5",
day = "1",
doi = "10.1109/ICASSP.2019.8682587",
language = "English",
series = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "66--70",
booktitle = "2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings",

}

TY - GEN

T1 - The Phasebook

T2 - Building Complex Masks via Discrete Representations for Source Separation

AU - Roux, Jonathan Le

AU - Wichern, Gordon

AU - Watanabe, Shinji

AU - Sarroff, Andy

AU - Hershey, John R.

PY - 2019/5/1

Y1 - 2019/5/1

N2 - Deep learning based speech enhancement and source separation systems have recently reached unprecedented levels of quality, to the point that performance is reaching a new ceiling. Most systems rely on estimating the magnitude of a target source, either directly or by computing a real-valued mask to be applied to a time-frequency representation of the mixture signal. A limiting factor in such approaches is a lack of phase estimation: the phase of the mixture is most often used when reconstructing the estimated time-domain signal. We propose to estimate phase using "phasebook", a new type of layer based on a discrete representation of the phase difference between the mixture and the target. We also introduce "combook", a similar type of layer that directly estimates a complex mask. We present various training and inference schemes involving these representations, and explain in particular how to include them in an end-to-end learning framework. We also present an oracle study to assess upper bounds on performance for various types of masks using discrete phase representations. We evaluate the proposed methods on the wsj0-2mix dataset, a well-studied corpus for single-channel speaker-independent speaker separation, matching the performance of state-of-the-art mask-based approaches without requiring additional phase reconstruction steps.

AB - Deep learning based speech enhancement and source separation systems have recently reached unprecedented levels of quality, to the point that performance is reaching a new ceiling. Most systems rely on estimating the magnitude of a target source, either directly or by computing a real-valued mask to be applied to a time-frequency representation of the mixture signal. A limiting factor in such approaches is a lack of phase estimation: the phase of the mixture is most often used when reconstructing the estimated time-domain signal. We propose to estimate phase using "phasebook", a new type of layer based on a discrete representation of the phase difference between the mixture and the target. We also introduce "combook", a similar type of layer that directly estimates a complex mask. We present various training and inference schemes involving these representations, and explain in particular how to include them in an end-to-end learning framework. We also present an oracle study to assess upper bounds on performance for various types of masks using discrete phase representations. We evaluate the proposed methods on the wsj0-2mix dataset, a well-studied corpus for single-channel speaker-independent speaker separation, matching the performance of state-of-the-art mask-based approaches without requiring additional phase reconstruction steps.

KW - deep learning

KW - discrete representation

KW - mask inference

KW - phase estimation

KW - source separation

UR - http://www.scopus.com/inward/record.url?scp=85068996241&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85068996241&partnerID=8YFLogxK

U2 - 10.1109/ICASSP.2019.8682587

DO - 10.1109/ICASSP.2019.8682587

M3 - Conference contribution

AN - SCOPUS:85068996241

T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

SP - 66

EP - 70

BT - 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

ER -