nn-inference-template

ADC23 Talk: Real-time inference of neural networks - a practical approach for DSP engineers

Neural Network Inference Template for Real-Time Critical Audio Environments - Slides & Overview

This page provides an overview of the resources related to our talk at the Audio Developer Conference 2023. Below you will find the slides, along with links to the recording and the related GitHub repositories.

Talk description

The intersection of neural networks and real-time environments is set to play a decisive role in upcoming audio processing innovations. Our recent experience of implementing neural timbre transfer technology in a real-time setting presented us with diverse challenges. Overcoming them has given us significant insight into the practicalities of running neural network inference inside an audio plugin.

This talk presents a pragmatic approach: starting with a trained model, we guide you through the steps necessary to run that model in a real-time environment. Along the way, we delve into the critical aspect of maintaining real-time safety and share proven strategies for ensuring a seamless, uninterrupted signal flow. Moreover, we address the delicate balance between latency, performance, and stability. For this, we use three different inference engines: LibTorch, TensorFlow Lite, and ONNX Runtime. While the in-house solutions for the popular machine learning frameworks PyTorch and TensorFlow seem like obvious choices, other engines may be better suited for certain use cases. By contrasting the characteristics of these engines, we hope to simplify your decision-making process.
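To give a flavour of the real-time considerations discussed in the talk, here is a minimal sketch of running a model with ONNX Runtime while keeping the hot path free of allocations. It is not the plugin's actual implementation; the model path, the tensor names, and the fixed {1, 1, 2048} shape are placeholder assumptions.

```cpp
// Minimal sketch (not the plugin's actual code): ONNX Runtime inference with all
// allocations done up front. Model path, tensor names and the {1, 1, 2048} shape
// are placeholder assumptions.
#include <onnxruntime_cxx_api.h>
#include <array>
#include <memory>
#include <vector>

class OnnxProcessor
{
public:
    OnnxProcessor()
        : env(ORT_LOGGING_LEVEL_WARNING, "nn-inference"),
          memoryInfo(Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault))
    {
        sessionOptions.SetIntraOpNumThreads(1); // keep the engine from spawning extra worker threads
        // On Windows the model path must be a wide string; plain char works elsewhere.
        session = std::make_unique<Ort::Session>(env, "model.onnx", sessionOptions);

        // Pre-allocate the input/output buffers and wrap them once - no allocation later.
        inputData.resize(2048);
        outputData.resize(2048);
        inputTensor = Ort::Value::CreateTensor<float>(memoryInfo, inputData.data(), inputData.size(),
                                                      shape.data(), shape.size());
        outputTensor = Ort::Value::CreateTensor<float>(memoryInfo, outputData.data(), outputData.size(),
                                                       shape.data(), shape.size());
    }

    // In a real-time plugin this call belongs on a dedicated inference thread that is
    // fed from the audio callback via a lock-free FIFO - never inside processBlock() itself.
    void runInference()
    {
        const char* inputNames[]  = {"input"};   // assumed tensor names
        const char* outputNames[] = {"output"};
        session->Run(Ort::RunOptions{nullptr},
                     inputNames,  &inputTensor,  1,
                     outputNames, &outputTensor, 1); // writes into outputData
    }

private:
    Ort::Env env;
    Ort::SessionOptions sessionOptions;
    std::unique_ptr<Ort::Session> session;
    Ort::MemoryInfo memoryInfo;
    std::array<int64_t, 3> shape {1, 1, 2048};
    std::vector<float> inputData, outputData;
    Ort::Value inputTensor {nullptr}, outputTensor {nullptr};
};
```

The point of the sketch is the split between setup and processing: everything that can allocate or block happens in the constructor, while the recurring inference call only touches pre-allocated memory and is decoupled from the audio callback by a separate thread, which is how the latency-versus-stability trade-off discussed in the talk is managed.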

Resources


ADC23 Talk: Source Code

The nn-inference-template repository contains the source code for a plugin capable of handling three inference engines concurrently, along with the benchmarks and minimal examples demonstrated at the conference.


Following Up: New Unified Inference Library

Based upon the nn-inference-template, anira aims to unify the process of integrating neural network inference into various real-time audio applications. The library provides a comprehensive API for developers.


ADC23 Talk: Watch Video

Watch the full video of our talk at ADC23 to learn how to implement neural network inference in real-time audio applications.


ADC23 Talk: Inspect Slides

Check out the slides from our ADC23 talk to dive into the details of our implementation, methods, and architecture.

About Us


Fares Schulz


Valentin Ackva