CHRISTOPHER LANDSCHOOT
Hi, I'm Chris.
I love everything to do with sound.
Welcome to my site.
ABOUT
Personal Profile
I love everything involving music, technology, and sound. I work as a machine learning engineer researching and developing new audio technologies with a focus on deep learning and spatial audio. I have also consulted on the acoustical design of the built environment, with projects ranging from performance venues to recording studios. My work and research include developing and implementing algorithms for audio source separation, music sample generation, sound source detection, and binaural audio externalization. I have also developed software to measure acoustic metrics, model wave behavior, process spatial audio, and create audio effects plugins.
When I am not focused on the design of sound and technology through my work or research, I am writing, recording, or performing music. I play guitar (electric, acoustic, bass), percussion, piano, and even dabble with the banjo. My artist name is After August (check me out on Spotify!), and I play musical styles from jazz to rock to folk and everything in between. I love all types of music.
I live in Chicago with my wife, Steph, and our little dog, Queso. Steph works as an IR nurse at Northwestern Memorial Hospital, assisting with groundbreaking operations, while Queso naps on my lap as I work at my desk. I play in the local kickball league and love to escape to the Adirondack Mountains in NY each summer for some hiking, kayaking, and time with the family.
PROFESSIONAL EXPERIENCE
Background & Expertise
August 2023 - Present
Machine Learning Engineer,
Whitebalance
Developing deep learning algorithms for audio source separation, recognition, and enhancement. Building core product technology by implementing state-of-the-art audio source separation frameworks, developing model improvements, and curating audio datasets.
August 2022 - Present
Audio Research Collaborator,
Virtuel Works
Collaboratively developing and implementing a post-processing, real-time binaural externalization algorithm for object-based spatial audio, focused on improving immersive audio experiences without introducing timbral artifacts.
March 2020 - February 2023
Acoustics Consultant,
Threshold Acoustics
Performed acoustic design in room acoustics, sound isolation, and noise control disciplines. Collaborated on a research team developing in-house software to model wave behavior via the finite-difference time-domain method as well as tools for impulse response and general acoustics analysis.
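As an illustration of the underlying method rather than the in-house software itself, a minimal 1D finite-difference time-domain update for the acoustic wave equation can be sketched in Python (the grid size, wave speed, and impulse source here are arbitrary assumptions):

```python
import numpy as np

# Minimal 1D FDTD for the acoustic wave equation (illustrative only).
# Pressure p is advanced from its two previous time steps using the
# standard second-order central-difference stencil.
c = 343.0          # speed of sound (m/s), assumed
dx = 0.01          # grid spacing (m), assumed
dt = dx / (2 * c)  # time step chosen to satisfy the CFL stability condition
n = 500            # number of grid points
steps = 1000

courant2 = (c * dt / dx) ** 2
p_prev = np.zeros(n)
p_curr = np.zeros(n)
p_curr[n // 2] = 1.0  # impulse source at the center of the domain

for _ in range(steps):
    p_next = np.zeros(n)
    # interior update: p[i]^{t+1} = 2 p[i]^t - p[i]^{t-1} + C^2 (p[i+1] - 2 p[i] + p[i-1])
    p_next[1:-1] = (2 * p_curr[1:-1] - p_prev[1:-1]
                    + courant2 * (p_curr[2:] - 2 * p_curr[1:-1] + p_curr[:-2]))
    # rigid (reflecting) boundaries: zero pressure gradient at the domain ends
    p_next[0], p_next[-1] = p_next[1], p_next[-2]
    p_prev, p_curr = p_curr, p_next
```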
August 2018 - March 2020
Acoustics Specialist,
Kirkegaard Associates
Performed acoustic design in room acoustics, sound isolation, and noise control disciplines. Developed auralization software using Max MSP that can encode, convolve, and decode higher-order ambisonic signals. Created a graphical user interface for client presentation and internal use.
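The Max MSP patch itself is not reproduced here, but the encode-convolve-decode chain can be sketched in Python for the first-order case (the ACN channel ordering, quad speaker layout, and unit impulse responses below are stand-ins for illustration):

```python
import numpy as np
from scipy.signal import fftconvolve

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order ambisonics (ACN order: W, Y, Z, X)."""
    w = mono * 1.0
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    return np.stack([w, y, z, x])

def convolve_channels(ambi, room_irs):
    """Convolve each ambisonic channel with a measured impulse response."""
    return np.stack([fftconvolve(ch, ir) for ch, ir in zip(ambi, room_irs)])

def decode_foa(ambi, speaker_dirs):
    """Basic (projection) decode to a loudspeaker array given (azimuth, elevation) pairs."""
    gains = np.array([[1.0,
                       np.sin(az) * np.cos(el),
                       np.sin(el),
                       np.cos(az) * np.cos(el)] for az, el in speaker_dirs])
    return gains @ ambi  # shape: (n_speakers, n_samples)

# Example: encode a 1 s noise burst at 45 degrees azimuth, decode to a quad layout.
fs = 48000
sig = np.random.randn(fs)
ambi = encode_foa(sig, np.deg2rad(45), 0.0)
irs = [np.array([1.0])] * 4  # unit impulses stand in for measured room IRs
quad = decode_foa(convolve_channels(ambi, irs),
                  [(np.deg2rad(a), 0.0) for a in (45, 135, 225, 315)])
```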
August 2017 - August 2018
Research Assistant,
Rensselaer Polytechnic Institute
Created a novel machine-learning algorithm in MATLAB that estimates the directions of arrival and relative levels of an arbitrary number of sound sources using a spherical microphone array. Produced two papers published in peer-reviewed journals, as well as a pending patent.
PROJECTS
Recent Audio & Machine Learning Projects
Built a lightweight system that applies waveform diffusion (1D U-Net) to generate short audio samples (such as drum sounds), designed to be trained and run on a low-end consumer GPU (<2GB VRAM). The goal of this project is to make waveform diffusion code accessible to those interested in exploring it but who have limited hardware resources. Model training metrics are available on W&B.
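As a rough illustration only (the project's actual architecture, noise schedule, and timestep conditioning are not reproduced here), a stripped-down PyTorch sketch of a waveform-diffusion training step with a toy 1D U-Net might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet1D(nn.Module):
    """Toy 1D U-Net: one downsampling and one upsampling stage with a skip connection.
    Timestep conditioning is omitted for brevity."""
    def __init__(self, ch=32):
        super().__init__()
        self.down = nn.Conv1d(1, ch, kernel_size=4, stride=2, padding=1)
        self.mid = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose1d(ch, ch, kernel_size=4, stride=2, padding=1)
        self.out = nn.Conv1d(ch + 1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        h = F.relu(self.down(x))
        h = F.relu(self.mid(h))
        h = F.relu(self.up(h))
        return self.out(torch.cat([h, x], dim=1))  # skip connection from the input

# One DDPM-style training step: predict the noise added at a random timestep.
model = TinyUNet1D()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1 - betas, dim=0)

clean = torch.randn(8, 1, 16384)          # batch of short waveforms (placeholder data)
t = torch.randint(0, 1000, (8,))
ab = alpha_bar[t].view(-1, 1, 1)
noise = torch.randn_like(clean)
noisy = ab.sqrt() * clean + (1 - ab).sqrt() * noise

opt.zero_grad()
loss = F.mse_loss(model(noisy), noise)    # the network learns to predict the added noise
loss.backward()
opt.step()
```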
Built a music source separation system in Python using a band-split recurrent neural network (RNN) framework based on this paper to compete in AIcrowd's Sound Demixing Challenge 2023. Four models (voice, bass, drums, other) were trained on Google Cloud Platform GPUs with W&B tracking, resulting in a 42% improvement over the baseline model on the "Label Noise" track.
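As a loose sketch of the band-split idea only (uniform bands and a single BiLSTM here, whereas the actual framework uses non-uniform splits and dual-path modeling across time and bands):

```python
import torch
import torch.nn as nn

class BandSplitRNN(nn.Module):
    """Toy band-split RNN: the spectrogram is cut into sub-bands, each band is
    projected to a shared feature size, and a BiLSTM models the sequence over time."""
    def __init__(self, n_freq=1024, n_bands=8, feat=64):
        super().__init__()
        self.band_size = n_freq // n_bands
        self.proj = nn.Linear(self.band_size, feat)
        self.rnn = nn.LSTM(feat * n_bands, feat * n_bands,
                           batch_first=True, bidirectional=True)
        self.mask = nn.Linear(2 * feat * n_bands, n_freq)

    def forward(self, spec):                         # spec: (batch, time, freq)
        b, t, f = spec.shape
        bands = spec.view(b, t, -1, self.band_size)  # (batch, time, bands, band_size)
        feats = self.proj(bands).flatten(2)          # (batch, time, bands * feat)
        seq, _ = self.rnn(feats)
        return torch.sigmoid(self.mask(seq))         # per-bin mask in [0, 1]

# Apply the estimated mask to a magnitude spectrogram (placeholder data).
spec = torch.rand(2, 100, 1024)
separated = BandSplitRNN()(spec) * spec
```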
Wrote and produced a song using AI-generated audio by leveraging the audio diffusion tool Dance Diffusion. Utilized waveform-based (1D convolution) diffusion with unconditional generation and audio style transfer functions to produce high-quality 48kHz samples.
Built a music genre classification system using a convolutional neural network (CNN) in Python, improving accuracy from a 10% baseline to 83.8%. The CNN was trained on Mel-spectrograms of 3-second audio samples, with data augmentation and regularization applied to optimize performance and mitigate overfitting.
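A rough sketch of that pipeline in PyTorch/torchaudio, with assumed Mel parameters, sample rate, and network size rather than the project's exact configuration:

```python
import torch
import torch.nn as nn
import torchaudio

# Mel-spectrogram front end for 3-second clips at 22.05 kHz (parameters assumed).
mel = torchaudio.transforms.MelSpectrogram(sample_rate=22050, n_fft=2048,
                                           hop_length=512, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

class GenreCNN(nn.Module):
    """Small CNN over (1, n_mels, time) Mel-spectrograms for a 10-genre problem."""
    def __init__(self, n_genres=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_genres)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Forward pass on a placeholder 3-second clip.
clip = torch.randn(1, 22050 * 3)
spec = to_db(mel(clip)).unsqueeze(1)   # (batch, 1, n_mels, frames)
logits = GenreCNN()(spec)
```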
EDUCATION
My Studies
January 2023 - April 2023
Data Science Immersive,
General Assembly
August 2017 - August 2018
GPA: 4.00/4.00
Master of Science, Acoustics,
Rensselaer Polytechnic Institute
August 2012 - May 2016
GPA: 3.50/4.00
Bachelor of Science, Mechanical Engineering,
Minor, Music Performance, Guitar
SUNY University at Buffalo
SKILLS
Professional Competencies
Programming
- Python
- C/C++
- SQL
- Git
- MATLAB
- Max/MSP/RNBO
- JUCE
- Spat (IRCAM)
- PyTorch
- TensorFlow
- Scikit-learn
- LaTeX
Music & Audio
- Composing
- Performing
- Recording
- Producing
- Mixing
- Mastering
- Pro Tools
- Audacity
- Reaper
- EASE/EASERA
- Odeon
- CATT-Acoustic
- Electric, Acoustic, & Bass Guitar
- Piano
- Percussion
- Vocals
- Banjo
Technical
- Machine Learning & Deep Learning
- Digital Signal Processing (DSP)
- Real-Time Spatial Audio Processing
- Audio Algorithm Development
- Audio Data Engineering
- Acoustics Simulation, Modeling, and Measurement
- Project Management
- Technical Writing