### small typos

...

samples). Given a sampling frequency $$\fq$$, sound processing manipulates discretized sound signals living in $$\Real^\Rel$$, denoted $$s(t)$$ in this report. The mathematical abstraction that manipulates signals is called a \emph{processor}. In sound processing, it is a function $$f:\left(\Real^\Rel\right)^p \to \left(\Real^\Rel\right)^q$$ that takes input signals $$x_i(t), i < p$$ and outputs signals $$y_0(t), \dots, y_{q-1}(t) = f(x_0(t), \dots, x_{p-1}(t))$$. As a convention, $$x(t)$$ will be used for input signals and $$y(t)$$ for output signals.\footnote{All notations are gathered in appendix \ref{sec:notations}.}

\begin{flushleft}
\begin{minipage}{.48\linewidth}
\subsection{The Faust Programming Language}
\end{minipage}\hspace{\stretch{1}}
\begin{minipage}{.2\linewidth}
\includegraphics[scale=0.5]{../pictures/LOGO_FAUST_COMPLET_BLEU.png}
\end{minipage}\\
\end{flushleft}

Faust\cite{faustprogramming} (Functional Audio Stream) is a compiled domain-specific language describing sound processors. Being purely functional, it does not manipulate samples as values in arrays, but works at a high level on

...

with simple primitives (such as \lstinline{+, *, sin}\dots) that are patched together using a block-diagram algebra (\lstinline{:} for composition, \lstinline{,} for parallel evaluation\dots). The example in \figref{fig:plus-simple} describes a simple stereo-to-mono converter. The two input signals are added (primitive \lstinline{+}), then the sum is divided by 2 (written \lstinline{/(2)} in Faust). These two basic processors are composed with a \lstinline{:}.
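To make the processor notion concrete, here is a minimal C++ sketch of the stereo-to-mono converter above, a processor with $$p = 2$$ input signals and $$q = 1$$ output signal. The function name and buffer-based interface are illustrative assumptions, not the actual code emitted by the Faust compiler.

```cpp
#include <cstddef>

// Hypothetical rendering of the Faust program "+ : /(2)", applied
// sample by sample over buffers of length n.
void stereo2mono(const float* x0, const float* x1, float* y, std::size_t n) {
    for (std::size_t t = 0; t < n; ++t)
        y[t] = (x0[t] + x1[t]) / 2.0f;  // add the two inputs, then divide by 2
}
```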
Figure \ref{fig:plus-simple} also shows a graphical representation (made by the compiler) of the signal processor described by the

...

C++/Rust/D/WebAssembly/LLVM\dots code implementing this formula.
% \lstinputlisting{../code/rewsum-lessgolf.dsp}\lstinputlisting{../code/rewsum-alt.dsp}}
% {shared addition}{0.48}{../code/rewsum-lessgolf-block.png}
% {unshared one}{0.48}{../code/rewsum-alt-block.png}

A very important constructor in the Faust block-diagram algebra is \lstinline{~}. It allows one to recursively define a signal $$s(t)$$ using the signal $$s(t-1)$$, \ie $$s(t)$$ delayed by one sample. If we want to implement the ramp signal

...

which can be written in Faust as shown in \figref{fig:ramp}.
{$$s(t) = 1 + s(t-1)$$}{../code/ramp.dsp}

This constructor is very important for Faust's expressiveness, but it is at the source of many problems, as it creates loops in the signal graph.\\

\subsection{The FAST\cite{fastproject} Project: Faust on FPGA}
\label{sec:fast-project}

...

dedicated to audio processing, for instance in an active noise cancellation headphone, samples are processed one after the other, which leads to a latency within $$\SI{10}{\micro\second}$$. This has however an obvious drawback. Computers are general-purpose machines, whereas ASICs are application-specific circuits, and circuits are not reprogrammable and very costly to produce. A trade-off between those two worlds can be found in Field-Programmable Gate Arrays (FPGA). An FPGA is an integrated circuit designed to emulate arbitrary digital circuits using a circuit description written in

...

applications, such as live artificial reverberation or active noise control.
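The recursion introduced by \lstinline{~} above, e.g.\ the ramp $$s(t) = 1 + s(t-1)$$, boils down to keeping one sample of state between iterations. The following C++ is a hand-written sketch of that idea, not the compiler's actual output; the struct and method names are ours.

```cpp
// One-sample feedback turned into a state variable: s(t) = 1 + s(t-1),
// with the delayed signal initialised to 0, as Faust does for "~".
struct Ramp {
    float s_prev = 0.0f;   // holds s(t-1)
    float tick() {         // produce the next sample of the ramp
        float s = 1.0f + s_prev;
        s_prev = s;
        return s;
    }
};
```

Calling `tick()` repeatedly yields 1, 2, 3, \dots, which is why such loops in the signal graph force a strictly sequential, stateful evaluation order.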
\label{sec:floating-point-vs}

As stated above, a Faust program represents processors, \ie functions operating on sequences of Real numbers. Yet the compiler must output C++ code, in which there is no representation of the Real numbers, so it has to use an approximation of the reals, for instance the floating-point numbers.

...

Since their standardisation in 1985 \cite{2019ieee7542019}, the floating-point formats have become the most frequent arithmetic formats for representing reals in computer programs\cite{goldberg1991whatevery}. Roughly speaking, a floating-point number is a pair $$(M, E)$$ called mantissa and exponent, representing the number $$M \times 2^E$$. Floating-point numbers are indeed a wonderful tool for computer arithmetic. They are not only a versatile and well-structured way of representing the real numbers, but also a way to do very quick brainless Do-What-I-Mean arithmetic (except for a few elementary rules whose infringement can be catastrophic\cite{muller2010handbookfloatingpoint}), allowing the programmer to focus on something else and thus devise more complex programs. This has however a cost: in many specific cases, the result of a floating-point computation is either embarrassingly over-accurate\footnote{24 bits are used for high-quality audio on DVDs and Blu-rays, whereas 16 bits are sufficient for Compact Discs.} (even with a dirty implementation) or mostly scrambled (either because of inaccurate inputs or because of approximation errors), and the machine has then spent precious time and energy computing noise.
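The $$(M, E)$$ pair of a float can be observed directly with the standard \lstinline{frexp}/\lstinline{ldexp} functions, which split a number into a normalised mantissa and an exponent. This is only an illustration of the decomposition, under the convention that $$|M|$$ is kept in $$[0.5, 1)$$; the helper names are ours.

```cpp
#include <cmath>

// Expose the (M, E) pair of a double: x = m * 2^e with 0.5 <= |m| < 1.
double mantissa_of(double x, int* e) { return std::frexp(x, e); }

// Rebuild M * 2^E from the pair.
double rebuild(double m, int e) { return std::ldexp(m, e); }
```

For instance $$6.0$$ decomposes as $$0.75 \times 2^3$$ under this convention.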
There are nonetheless other formats that were used before floating-point to represent numbers: the fixed-point numbers. The idea is the same as for floating-point numbers, but the exponent is no longer part of the representation: it is instead hard-coded in the program. The main advantage is that fixed-point numbers in the same format behave almost exactly like integers. They therefore require fewer resources (time, power, silicon) than floats. However, fixed-point numbers being far less versatile than floats, the programmer often has to handle multiple fixed-point formats in the same program, and must implement the conversions between formats by hand, in software. This makes the code harder to read.
The user must also be cautious when choosing a format, as a poor choice can result in uselessly over-accurate computations, in completely scrambled values (just like with floats), or in overflows (just like with integers).

Besides, from the hardware point of view, there is often no choice but to use a versatile format: the CPU designer does not know in advance what kind of program will be executed on the architecture, and it is unthinkable to design one operator per fixed-point format, mainly because there is an infinite number of them and because of silicon scarcity on the chip. This is yet not a problem on an FPGA: behaving mostly as a reprogrammable circuit, one can afford to implement very specific operators for very specific formats (e.g.\ a division by 3 taking as input an unsigned integer over 5 bits and outputting a fixed-point number with a unit in the last place (ulp) of $$2^{-8}$$ \cite{ugurdag2017hardwaredivision}) without the risk of silicon waste: if another very specific arithmetic operator is needed for another very specific application, the FPGA can be reprogrammed for this task.

...

The range is now $$\left[-2^\msb , 2^\msb - 2^{\lsb}\right]$$, but the smallest representable number (and hence the smallest representable difference, \ie the ulp) remains $$2^{\lsb}$$ in absolute value. Very common signed fixed-point formats are $$\sfix(0,16)$$ and $$\sfix(0,24)$$. They are used by Analog-to-Digital and Digital-to-Analog Converters (ADC and DAC) as they represent numbers between -1 ...
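As a sketch of how such a format behaves in software, the following C++ models $$\sfix(0,16)$$ (range $$[-1, 1 - 2^{-16}]$$, ulp $$2^{-16}$$) as a plain integer scaled by $$2^{16}$$. The type name and helpers are assumptions of ours, and overflow handling is deliberately omitted.

```cpp
#include <cstdint>
#include <cmath>

// sfix(0,16) modelled as an integer holding round(x * 2^16): the exponent
// is hard-coded in the program, so arithmetic is plain integer arithmetic.
constexpr int FRAC = 16;            // number of fractional bits
using sfix16 = std::int32_t;

sfix16 to_fix(double x)   { return (sfix16)std::lround(x * (1 << FRAC)); }
double from_fix(sfix16 f) { return (double)f / (1 << FRAC); }

// A product of two sfix(0,16) values carries 32 fractional bits; staying
// in the format requires the explicit by-hand rescaling mentioned above.
sfix16 mul_fix(sfix16 a, sfix16 b) {
    return (sfix16)(((std::int64_t)a * b) >> FRAC);
}
```

Note that rounding to the nearest ulp means a value like $$1/3$$ cannot be represented exactly, which is precisely the accuracy trade-off discussed above.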
...

\usepackage{geometry}
\geometry{left=20mm,right=20mm,top=10mm, bottom=8mm, heightrounded, includefoot}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
%\usepackage{xcolor}
\usepackage{algorithm}

...

\lstset{
  language=c,
  basicstyle=\ttfamily,
  literate={~}{{\raisebox{-.25em}{\textasciitilde}}}{1}
  % literate={~}{\char `~}{0},
  % linewidth=.5\textwidth,
  % numbers=left,
  % numberstyle=\small
}

...

% \renewcommand{\processdelayedfloats}{}
% \renewcommand{\doublefig}[]{}
\newcommand{\ie}{\emph{i.e.\ }}
\DeclareMathOperator{\sfix}{sfix}

...