### spell-check lsb

parent 99227006
@@ -14,11 +14,11 @@
numbers on $$w$$ bits potentially gives a number on $$2w$$ bits. Ensuring that
every multiplication is exact would lead to a tremendous number of bits at the
end of the computations. Worse, there are some very convenient numbers, such as
$$0.1$$, that don't have a finite binary representation. This is mainly due to
the fact that humans -- particularly in signal processing, where attenuation is
expressed in decibels -- compute in base 10, whereas most computers prefer base
2. So errors are unavoidable, yet it is not a hopeless case. In the real (or at
least physical) world, every value, every measure has an uncertainty. But this
is not a problem, provided that one can prove that this uncertainty is bounded
by a value that is negligible when compared with the result. For instance in
sound

@@ -52,9 +52,9 @@
second one is that if we compute a result that is more accurate than needed,
the extra bits are lost because of the small input of the DAC. These two ideas
are at the core of FloPoCo's motto: \emph{``Compute Just Right''}. Compute
right, to have (almost) all the bits of the output correct, but don't do more
than what is needed. In other words, every computed bit must be meaningful.
Another important part of the FloPoCo philosophy is that one should not write
operators for a given precision (single, double, quad precision floats) but
instead have a generator that can produce operators parameterized by the
accuracy of their inputs and outputs. This allows the operators to be more
versatile and future-proof. The aim of this part is to apply this philosophy to
Faust programs.
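The claim that $$0.1$$ has no finite binary representation is easy to check; the following Python snippet (an illustration added here, not part of the report) shows the value actually stored and how the error accumulates under addition:

```python
from decimal import Decimal
from fractions import Fraction

# The nearest IEEE-754 double to 0.1 is slightly above 0.1:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625

# Ten rounding errors add up: the sum is not exactly 1.
print(sum([0.1] * 10) == 1.0)  # False

# With exact rationals (i.e. computing "in base 10"), the same sum is exact.
print(sum([Fraction(1, 10)] * 10) == 1)  # True
```

This is exactly the base-10 versus base-2 mismatch described above: the decibel-friendly constants humans write down are not the ones the machine can store.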
@@ -65,7 +65,7 @@
one mathematical operation, there is one generator along with one academic
paper proving its correctness. Here, we are dealing with an arbitrary Faust
program. The second difference comes from the input part of the audio
processing circuit. The dual operator of the DAC is the ADC (for
Analog-to-Digital Converter). Its role is to transform analog data (such as
the vibration of the membrane of the microphone) into digital data. This
process, called quantization, leads to unavoidable errors directly in the
inputs, whereas in FloPoCo one of the working hypotheses is that every input is
correct.

@@ -83,7 +83,7 @@
It should be clear by now that \texttt{a + b} does not implement $$a + b$$,
mainly because there is absolutely no guarantee that $$a + b$$ can be
represented in the same format as $$a$$ and $$b$$. But it can be modelled as
follows. One can ensure that \texttt{a + b} implements $$\rnd(a + b)$$, with
$$\rnd$$ a \emph{rounding} operator, for instance RN for Round-to-Nearest,
which replaces $$a + b$$ by the number in the output format that is nearest to
the real mathematical value $$a + b$$. This is very convenient, because this

@@ -123,7 +123,7 @@
example:\footnote{assuming our backend doesn't have a fused multiply-add}
&\leq \maxerr^+ \times c + \maxerr^\times
\end{align*}
Remark that the error on $$f(a,b,c)$$ depends on a bound on the value of
$$c$$. Error analysis also depends on bounds of values as determined in the
previous section. While this technique may seem automatic, there are some
tricks involved. For instance, why insert the less-erroneous term

@@ -188,7 +188,7 @@
proofs.
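The $$\rnd$$ model and the resulting error bound can be sketched numerically. The snippet below (my illustration; the shape of the bound is taken from the fragment above, assuming $$f(a,b,c)=(a+b)\times c$$ and $$\maxerr^{+} = \maxerr^{\times} = 2^{\lsb-1}$$, i.e. half a ULP per RN rounding):

```python
def rn(x, lsb):
    """Round-to-nearest at LSB index lsb: nearest multiple of 2**lsb."""
    ulp = 2.0 ** lsb
    return round(x / ulp) * ulp

lsb = -8
maxerr = 2.0 ** (lsb - 1)   # RN error is at most half a ULP of 2**lsb

a, b, c = 0.3, 0.45, 1.7
exact = (a + b) * c
computed = rn(rn(a + b, lsb) * c, lsb)  # each operation rounds its result

# The rounding error of the sum is amplified by c, then one more rounding
# error is added by the product: |err| <= maxerr * |c| + maxerr.
assert abs(computed - exact) <= maxerr * abs(c) + maxerr
```

This makes concrete why the bound on $$f(a,b,c)$$ needs a bound on $$c$$: the error committed on $$a+b$$ is multiplied by $$c$$ before the final rounding.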
\subsection{Local Rules, easy to implement but hard to devise}
\label{sec:local-rules}
At the beginning of this internship, the foreseen technique to design accuracy
was to use a simple walk on the signal graph, except perhaps for feedback
loops. My role was to design a set of rules that would determine the LSB of a
node of the signal graph (\ie{} a signal) depending on its type ($$+, \times$$,
constant,

@@ -196,7 +196,7 @@
input, etc\dots), the LSB of its input nodes (its arguments if it is an
operator) when doing an Input-to-Output approach, and the LSB of its output
nodes (the operators fed by the signal) when doing an Output-to-Input approach.
However, having failed to implement convincing rules, I succeeded in showing to
what extent this approach misses the information needed to determine the LSB.

@@ -205,7 +205,7 @@
The first natural idea was to start from the inputs and to do computations as
precise as possible, in order to avoid roundoff errors. According to this
principle, some rules are very easy to derive: for instance, the sum of two
elements with LSBs $$\lsb_1, \lsb_2$$ must have LSB $$\lsb = \min(\lsb_1,
\lsb_2)$$, and their product must have $$\lsb = \lsb_1 + \lsb_2$$. Under this
condition, fixed-point multiplication and

@@ -227,11 +227,11 @@
addition are exact (see \figref{fig:scale-best}).
\label{fig:scale-best}
\end{figure}
\faustexfig{Weighted sum}{\label{fig:pond}}%
{0.4}{../code/pond-block.png}%
{0.5}{../code/pond-sig-old.png}%
{}{../code/pond.dsp}%
On the weighted sum example~\figref{fig:pond}, assuming an input LSB of -23
would give an output LSB of -28.
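The two scale-to-best rules can be checked with exact rational arithmetic (a Python illustration added here, not from the report): a fixed-point value with LSB index $$\lsb$$ is an integer multiple of $$2^{\lsb}$$, and with $$\min$$ for sums and addition of LSB indices for products, both operations stay exact.

```python
from fractions import Fraction

def lsb_sum(l1, l2):   # sum rule:     lsb = min(lsb1, lsb2)
    return min(l1, l2)

def lsb_prod(l1, l2):  # product rule: lsb = lsb1 + lsb2
    return l1 + l2

x = Fraction(5, 2 ** 23)   # a value with LSB index -23
y = Fraction(3, 2 ** 4)    # a value with LSB index -4

s, p = x + y, x * y
# Exactness: each result is an integer multiple of 2**lsb of its format,
# so no bit is lost.
assert (s / Fraction(2) ** lsb_sum(-23, -4)).denominator == 1
assert (p / Fraction(2) ** lsb_prod(-23, -4)).denominator == 1
```

On the weighted sum of \figref{fig:pond}, it is the product rule that pushes an input LSB of -23 down to -28: multiplying by coefficients adds their LSB indices to the signal's.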
Yet this approach raises many questions concerning primitives such as the

@@ -286,7 +286,7 @@
and $$x(t)$$ with LSBs $$\lsb, \lsb'$$ such that $$\lsb > \lsb'$$, bits of
\label{fig:scale-worst}
\end{figure}
On the weighted sum example~\figref{fig:pond}, this would lead to an LSB of
$$-23$$ on the whole circuit. While the question of the irrational primitives
is still pending, always using the worst precision makes the datapath shrink
instead of growing, so the circuit costs far less, and because of the format of

@@ -307,7 +307,7 @@
scale-to-best case is also true, but on each signal. For instance
on~\figref{fig:pond} the last sum is between an element of LSB index $$-23$$
and an element of LSB $$-28$$, so the last 5 bits of the bottom part of the sum
must be truncated. However, this information is not back-propagated to indicate
that those last bits are not useful.

Finally, there are many cases in which the LSB does not represent the error on
the signal. The most obvious case is an integer signal, which is exact but
whose

@@ -315,7 +315,7 @@
LSB is 0. Integers are very easy to detect at compile-time, but this can be
generalized to any LSB index using sliders. A slider is a Faust primitive used
to declare GUI elements that allow the user to interact with a Faust program
(see \figref{fig:sliders}). One declares a slider with 5 arguments: its name,
default value, minimum, maximum and step. The signal output by the slider can
then take any value between its minimum and maximum that is an integer multiple
of the step. A very basic usage is the gain slider: multiplying a signal by a
slider between 0 and 1 just before the final output allows one to tune the
volume at
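Since a slider's output is an integer multiple of its step, the step alone determines the LSB the signal actually needs. A small sketch (the helper name is mine, and it assumes a power-of-two step; a non-power-of-two step such as 0.1 is itself not exactly representable):

```python
import math

def slider_lsb(step):
    """LSB index of a slider signal whose step is a power of two:
    every reachable value is an integer multiple of 2**floor(log2(step))."""
    return math.floor(math.log2(step))

print(slider_lsb(0.25))  # step 2^-2   -> LSB index -2
print(slider_lsb(1.0))   # integer steps -> LSB index 0 (integer signal)
```

The integer-signal case of the text is recovered as the special case step = 1, LSB index 0.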
@@ -394,7 +394,7 @@
we then have
So, to ensure $$\abserr_T \leq 2^{\lsb}$$, we need to have $$\lsb' = \lsb -
2$$. Note that the bits at indices $$\lsb-1, \lsb-2$$ are finally dropped, but
they are not useless: they participate in the accuracy of the result before the
rounding. These bits are named \emph{guard bits} in the literature.

As optimal as this result might be, we can still do better. If we want the
result of $$(a_1+ a_2) + (a_3+a_4)$$ with an LSB of -23, applying the rule a

@@ -460,7 +460,7 @@
as-global-as-possible automatic approach. We opted for the following method
\begin{enumerate}
\item Detect patterns on the graph and contract them into one special node
  indexed with the pattern
\item Apply one of the aforementioned local rules on the contracted graph
\item Compute the LSB of the elements of each pattern according to the global
  outputs and inputs of the pattern computed at the above step
\end{enumerate}

@@ -490,7 +490,7 @@
useful\cite{volkova2017reliableimplementation}. We fall back on the literal
programming problem: once implemented, there is absolutely no clue to tell the
compiler that the wanted processor was a biquad. Yet, as biquads are very
useful processors, they are already implemented in the standard library. So the
biquad implementation of most real-life Faust programs simply calls the
\lstinline{biquad} function of the filters library from the standard library,
which is very easy to detect, but this must be done before function evaluation.

@@ -503,7 +503,7 @@
adding a rounding to $$2^{\lsb}$$ before the output, and doing every
computation with $$1 + \lceil \log(n) \rceil$$ guard bits (\ie{} with an LSB
index of $$\lsb - 1 - \lceil \log(n) \rceil$$).
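The guard-bit rule for an $$n$$-term sum can be exercised numerically. The sketch below (my illustration, assuming round-to-nearest at each partial sum) computes with LSB index $$\lsb - 1 - \lceil \log_2 n \rceil$$ and rounds once at the end; the $$n$$ half-ULP errors plus the final rounding then stay below $$2^{\lsb}$$:

```python
import math
import random

def rn(x, lsb):
    """Round-to-nearest at LSB index lsb."""
    ulp = 2.0 ** lsb
    return round(x / ulp) * ulp

def guarded_sum(terms, lsb):
    # 1 + ceil(log2(n)) guard bits below the target LSB
    g = lsb - 1 - math.ceil(math.log2(len(terms)))
    acc = 0.0
    for t in terms:
        acc = rn(acc + t, g)   # each partial sum rounded with guard bits
    return rn(acc, lsb)        # single final rounding to the target LSB

random.seed(0)
lsb = -23
terms = [random.uniform(-1, 1) for _ in range(8)]
err = abs(guarded_sum(terms, lsb) - sum(terms))
assert err <= 2.0 ** lsb
```

With $$n = 8$$ the intermediate LSB is $$-27$$: eight roundings contribute at most $$8 \times 2^{-28} = 2^{-25}$$, the final rounding at most $$2^{-24}$$, for a total well under $$2^{-23}$$.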
For the biquad case, and more generally for any LTI filter, the tool described
in \cite{volkova2017reliableimplementation} would also be very useful. Indeed,
WCPGs are necessary to estimate the error propagation: for a filter $$f$$ with
WCPG $$W$$, by linearity \(|f(x(t) + \abserr_x) - f(x(t))| = |f(x(t) +
\abserr_x - x(t))| = |f(\abserr_x)| \leq
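For a single-input LTI filter the WCPG is the $$\ell_1$$ norm of the impulse response, so it can be estimated directly. A sketch (my illustration, not the tool of \cite{volkova2017reliableimplementation}) on the one-pole filter $$y(t) = a\,y(t-1) + x(t)$$, whose WCPG is $$1/(1-|a|)$$ for $$|a| < 1$$:

```python
def one_pole(a, xs):
    """Run y(t) = a*y(t-1) + x(t) over the input sequence xs."""
    y, ys = 0.0, []
    for x in xs:
        y = a * y + x
        ys.append(y)
    return ys

a = 0.5
impulse = [1.0] + [0.0] * 200

# WCPG = l1 norm of the impulse response; here sum of 0.5**k -> 1/(1-0.5) = 2.
wcpg = sum(abs(h) for h in one_pole(a, impulse))
print(round(wcpg, 6))  # 2.0 (up to the truncated tail of the response)
```

An input perturbation bounded by $$\abserr_x$$ thus yields an output perturbation bounded by $$W \cdot \abserr_x = 2\,\abserr_x$$ for this filter, which is exactly the bound the truncated inequality above is heading towards.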