Commit 28502411 by Antonin Dudermel

### Tanguy's comments

parent c697df15
@@ -115,7 +115,7 @@
 signal processing languages, Faust is not suited for applications in which
 the latency is critical. This is due to the fact that Faust programs are
 meant to be executed on computers. When recording sound with a microphone,
 the sound card of the computer encodes sound samples before notifying the
 OS that samples are
-ready. Yet the Kernel can not afford to have an interruption from the sound card
+ready. Yet the Kernel cannot afford to have an interruption from the sound card
 every sample (at $\SI{44100}{\hertz}$, that means once every
 $\SI{23}{\micro\second}$), so the sound card buffers samples and uses one
 interruption to send the whole buffer to the Kernel. Similarly, when the OS
@@ -175,7 +175,7 @@
 instance the floating-point numbers. Since their standardisation in 1985
 \cite{2019ieee7542019}, the floating-point formats have become the most
 frequent arithmetic formats for representing reals
-in computer programs\cite{goldberg1991whatevery}. Roughly speaking, a
+in computer programs \cite{goldberg1991whatevery}. Roughly speaking, a
 floating-point number is a pair $(M, E)$, called mantissa and exponent,
 representing the number $M \times 2^E$. Floating-point numbers are indeed a
 wonderful tool for computer arithmetic. They are not only a versatile and
@@ -113,7 +113,8 @@
 example:\footnote{assuming our backend doesn't have a fused-multiply-add}
 &= (a + b)\times c - \rnd(a + b)\times c + \rnd(a + b)\times c
 -\rnd(\rnd(a + b)\times c)\\
-&= [(a + b) - \rnd(a + b)]\times c + \rnd(\rnd(a + b)\times c) \\
+&= [(a + b) - \rnd(a + b)]\times c +
+\left[\rnd(a + b)\times c -\rnd(\rnd(a + b)\times c) \right] \\
 &= \abserr^+(a, b)\times c + \abserr^\times (\rnd(a + b), c) \\
 |f(a, b, c) - \cpt{f}(a,b,c)| &= |\abserr^+(a, b)\times c +
@@ -382,12 +383,13 @@
 recursively, we must allow on $s(t)$ and $x(t)$ errors
 $\abserr_s, \abserr_x \leq 2^{\lsb'}$. Since $\lsb$ and $\lsb'$ might be
 different, we must add a rounding before the output, with error
 $\abserr_\rnd \leq 2^{\lsb-1}$. As the fixed-point addition operator is exact,
-we then have
+we then have $\abserr^+ = 0$ and thus
 \begin{align*}
 \abserr_T &= \rnd ((s(t) + \abserr_s) + (x(t) + \abserr_x)) - (s(t) + x(t)) \\
 &= s(t) + \abserr_s + x(t) + \abserr_x + \abserr_\rnd - s(t) - x(t)\\
 &= \abserr_s + \abserr_x + \abserr_\rnd\\
+&\leq 2^{\lsb'} + 2^{\lsb'} + 2^{\lsb - 1} \\
 &= 2^{\lsb' + 1} + 2^{\lsb - 1}
 \end{align*}
@@ -49,8 +49,8 @@
 Note that once again, there is a minimal interval which is
 where $\Cvx$ is the convex closure. As this interval can sometimes be very
 costly to compute, relaxed versions can
-be chosen for more efficiency. For instance, defining $\cos([a, b]) = [0, 1]$
-for any interval $X$ is perfectly correct. It is however very important that
+be chosen for more efficiency. For instance, defining $\cos([a, b]) = [-1, 1]$
+for any interval $[a,b]$ is perfectly correct. It is however very important that
 every operation on intervals remains increasing for $\subseteq$, \ie for
 $X \subseteq X', Y \subseteq Y'$ we must have
 $X \diamond Y \subseteq X' \diamond Y'$.
@@ -161,12 +161,13 @@
 The bell also raises an instructive example with the implementation of
 $s(t) = \cos(\omega t)e^{-t/\tau}$ using a biquad. Looking at the formula, it
 is obvious that $|s(t)| \leq 1$, but apart from very naive and costly
 implementations, this formula does not appear in code and, for the biquad, the
-programmer implicitly relies on the CCDE verified by the formula to implement
-it efficiently. This is one of the main problems in program analysis. The
-programmers don't tell to the computer what to do, but only how to do it.
+programmer implicitly relies on the constant-coefficient difference equation
+verified by the formula to implement it efficiently. This is one of the main
+problems in program analysis. The programmers do not tell the computer what
+to do, but only how to do it.
 Knuth developed this idea with his paradigm of literate programming, arguing
 that the comments should be more important than the code
 itself\footnote{See \TeX{} example
 \url{https://www-cs-faculty.stanford.edu/~knuth/programs/tex.web}}.
\lstset{language=c}
@@ -229,7 +230,7 @@
 To compute the $S'(t)$, we could simply transform the recurrence relation
 \[
 \recSig{i}{s(i) = 0}{s(i+1) = f(s(i)\dots s(i-k))} \text{ into }
-\recSig{i}{S'(i) = 0}{S'(i+1) = f(S'(i)\dots S'(i-k))}
+\recSig{i}{S'(i) = \bot}{S'(i+1) = f(S'(i)\dots S'(i-k))}
 \]
 Note that for any function $f$ on signals, the function $f$ on intervals is
@@ -241,7 +242,7 @@
 However, this would not be very convenient. As the sequence $S'(i)$ would
 not necessarily be increasing for $\subseteq$, we would have to keep track
 of all the $S'(t)$ to compute $\bar{S}'$. Using instead the relation
-\[\recSig{i}{S(i) = [0]}{S(i+1) = S(i) \vee f(S(i)\dots S(i))}\]
+\[\recSig{i}{S(i) = \bot}{S(i+1) = S(i) \vee f(S(i)\dots S(i))}\]
 can lead to over-approximations, but ensures that the $S(t)$ are increasing
 and reduces the recurrence relation to an order 1 recurrence, and we keep
 the fact