Commit fa485785 authored by Antonin Dudermel

putting subsubsections in the conclusion to make it more readable

parent 049efe75
@@ -4,9 +4,9 @@
\section{Conclusion and Future Work}
\label{sec:conclusion}
During this internship, we devised algorithms to allow the compilation of
Faust programs on targets using fixed-point arithmetic. As fixed-point formats
are very specific, one has to choose one fixed-point format (MSB and LSB
indexes) per signal to avoid being drastically suboptimal. To determine the
MSB index of each signal, we proposed and implemented a bounding method based on
the abstract interpretation of the program on the interval lattice. Yet, as
@@ -77,20 +77,24 @@ echoes), but are there others?)
\todo{statistical approach}
\subsubsection{External tools for pattern design}
\label{sec:extern-tools-patt}
The aim of this internship was to devise tools that can be embedded within the
Faust compiler. As this seems to be a tremendous task (to be efficient on the
LSB, we would at least have to re-implement a substantial subset of MPFR in the
compiler), we might consider an approach using external tools to annotate Faust
programs before feeding them to the compiler. This way we could interface
state-of-the-art tools to devise some parts of the Faust program.
There is for instance \cite{adje2021fastefficient}, which does automatic MSB and
LSB computations by describing the MSB and LSB of each value as variables of an
Integer Linear Program (ILP). To solve these ILPs, they prove that any solution
of the LP relaxation (the ILP without the integrality constraint) is an integer
solution. So they just need to run any LP solver (such as the simplex algorithm)
on it to get a nearly-optimal solution. Their work was on a small toy imperative
Turing-complete language, but it might be translated into the Faust
block-diagram algebra.
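To make the idea concrete, here is a minimal sketch (with hypothetical signals,
accuracy requirements and propagation rules, not the actual formulation of
\cite{adje2021fastefficient}) of such a word-length problem written as an LP and
solved with an off-the-shelf solver; on this toy instance the relaxed optimum is
already integral:
\begin{verbatim}
from scipy.optimize import linprog

# Fractional-bit counts [f_a, f_b, f_s] for signals a, b and s = a + b.
c = [1, 1, 1]            # minimise the total number of fractional bits
A_ub = [[-1,  0,  0],    # f_a >= 12  (accuracy requirement on a)
        [ 0, -1,  0],    # f_b >= 14  (accuracy requirement on b)
        [ 1,  0, -1],    # f_s >= f_a (adder output keeps a's bits)
        [ 0,  1, -1]]    # f_s >= f_b (adder output keeps b's bits)
b_ub = [-12, -14, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 32))
print(res.x)             # [12. 14. 14.]: the LP optimum is already integral
\end{verbatim}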
We also mentioned the work in \cite{volkova2017reliableimplementation} twice.
The algorithm implemented in that thesis rewrites the equation
@@ -124,6 +128,9 @@ characteristics 1. saves the user from computing the filter
coefficients corresponding to the wanted characteristics and 2. gives the filter
designer one more degree of freedom, which allows more efficient and accurate filters.
\subsubsection{Unexplored Possibilities}
\label{sec:unexpl-poss}
Although equality saturation~\cite{willsey2021eggfast} was not the subject of
this internship, it would be interesting to implement it in the Faust
compiler. As hard as it may be, Faust already needed the complex tools used in
@@ -133,17 +140,21 @@ graph equivalences. As equality saturation works very well with lattices, given
an expression, one could compute the most interval-friendly equivalent
expression.
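As a small illustration (a toy interval evaluator, not code from the compiler),
two algebraically equivalent expressions can have very different interval
bounds, so picking the interval-friendly one directly tightens the inferred MSB:
\begin{verbatim}
# Toy interval arithmetic: compare x*(y+z) with x*y + x*z.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

x, y, z = (-1.0, 1.0), (1.0, 2.0), (-2.0, -1.0)
print(imul(x, iadd(y, z)))           # (-1.0, 1.0)  factored form
print(iadd(imul(x, y), imul(x, z)))  # (-4.0, 4.0)  distributed form
\end{verbatim}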
Something we did not study is operator sharing. Indeed, in HLS, when an
operator, for instance a floating-point adder, must be used in two parts of a
circuit, the HLS tool may manage to use a single circuit for both computations
(for instance with pipelining). Yet, by designing one format per signal, we
completely lose the redundancy of operators (we will have an adder for inputs
in \(\sfix(0,-23)\), another for inputs in \(\sfix(0, -21)\), etc\dots). So,
even by computing fewer bits, we could use more silicon. This has been studied
in \cite{menard2012highlevelsynthesis} using optimisation techniques. The idea
is to increase accuracy, computing useless bits, but to set many operators to
the same format, so that one implementation can be used for many operators.
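A rough sketch of the idea (hypothetical formats and a naive unification rule,
not the algorithm of \cite{menard2012highlevelsynthesis}): widening several
per-signal formats to a single shared one wastes a few bits but lets one adder
implementation serve all of them.
\begin{verbatim}
# Fixed-point formats as (msb, lsb) pairs; unify to the smallest common format.
def unify(fmts):
    return (max(m for m, _ in fmts), min(l for _, l in fmts))

per_signal = [(0, -23), (0, -21), (1, -22), (0, -20)]
print(len(set(per_signal)), "distinct adder formats")  # 4 distinct adders
print("shared format:", unify(per_signal))             # (1, -23): one wider adder
\end{verbatim}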
Another interesting idea from this paper is the use of statistical error models
instead of our worst-case upper bounds. Provided a good statistical distribution
of the rounding errors, this makes it possible to treat infrequent extremal
cases as rare and acceptable.
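A toy Monte-Carlo experiment (assuming independent, uniformly distributed
rounding errors, which is an idealisation) shows how far the typical accumulated
error can be from the worst-case bound:
\begin{verbatim}
import numpy as np

n, u = 256, 2.0**-23                       # 256 roundings, unit roundoff u
rng = np.random.default_rng(0)
errors = rng.uniform(-u/2, u/2, size=(10000, n)).sum(axis=1)
print("worst-case bound :", n * u / 2)                           # ~1.5e-5
print("99.9th percentile:", np.quantile(np.abs(errors), 0.999))  # ~2e-6
\end{verbatim}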
\end{document}
%%% Local IspellDict: en