Commit f8cd2641 authored by Antonin Dudermel's avatar Antonin Dudermel

biblio changes

parent 1a831be5
title = {{{IEEE}} 1788-2015 - {{IEEE Standard}} for {{Interval Arithmetic}}},
year = {2015},
file = {/home/antonin/Zotero/storage/HNGLDC3Q/1788-2015.html},
howpublished = {}
title = {Digital Biquad Filter},
year = {2019},
@@ -11,15 +18,15 @@
language = {en}
title = {{{IEEE}} 754},
year = {2021},
month = apr,
abstract = {The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard. The standard defines: arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs) interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, was published in July 2019. It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes and new recommended operations.},
annotation = {Page Version ID: 1016159052},
copyright = {Creative Commons Attribution-ShareAlike License},
journal = {Wikipedia},
language = {en}
title = {{{IEEE}} 754-2019 - {{IEEE Standard}} for {{Floating}}-{{Point Arithmetic}}},
year = {2019},
pages = {1--84},
doi = {10.1109/IEEESTD.2019.8766229},
abstract = {This standard specifies interchange and arithmetic formats and methods for binary and decimal floating-point arithmetic in computer programming environments. This standard specifies exception conditions and their default handling. An implementation of a floating-point system conforming to this standard may be realized entirely in software, entirely in hardware, or in any combination of software and hardware. For operations specified in the normative part of this standard, numerical results and exceptions are uniquely determined by the values of the input data, sequence of operations, and destination formats, all under user control.},
file = {/home/antonin/Zotero/storage/7YXEVL98/8766229.html},
journal = {IEEE Std 754-2019 (Revision of IEEE 754-2008)},
keywords = {arithmetic,binary,computer,decimal,exponent,floating-point,Floating-point arithmetic,format,IEEE 754,IEEE Standards,interchange,NaN,number,rounding,significand,subnormal.}
@@ -78,13 +85,22 @@
file = {/home/antonin/Zotero/storage/3GGBLLIE/CousotCousot-POPL-77-ACM-p238--252-1977.pdf}
title = {Certifying Floating-Point Implementations Using {{Gappa}}},
title = {Certifying the Floating-Point Implementation of an Elementary Function Using {{Gappa}}},
author = {{de Dinechin}, Florent and Lauter, Christoph and Melquiond, Guillaume},
pages = {21},
abstract = {High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.},
file = {/home/antonin/Zotero/storage/NBYNQ7A5/de Dinechin et al. - Certifying floating-point implementations using Gap.pdf},
language = {en}
year = {2011},
month = feb,
volume = {60},
pages = {242--253},
publisher = {{IEEE}},
doi = {10.1109/TC.2010.128},
journal = {IEEE Transactions on Computers},
nourl = {},
number = {2},
url-hal = {},
x-editorial-board = {yes},
x-id-hal = {inria-00533968},
x-international-audience = {yes}
@@ -132,12 +148,6 @@
number = {1}
title = {{{IEEE}} 1788-2015 - {{IEEE Standard}} for {{Interval Arithmetic}}},
file = {/home/antonin/Zotero/storage/HNGLDC3Q/1788-2015.html},
howpublished = {}
title = {{{InriaForge}}: {{MPFI}}: {{Project Home}}},
file = {/home/antonin/Zotero/storage/PAK6NUNG/mpfi.html},
@@ -169,12 +169,13 @@ applications, such as live artificial reverberation or active noise control.
\subsection{Implementing the Reals: Floating-Point VS Fixed-Point numbers}
As stated above, a Faust program represents processors, \ie{} sequences of real
numbers. Yet the compiler must output C++ code, which has no representation of
the real numbers, so it has to use an approximation of the reals, for
instance the floating-point numbers.
Since their standardisation in 1985 \cite{2021ieee754} the floating-point
Since their standardisation in 1985 \cite{2019ieee7542019} the floating-point
formats have become the most common arithmetic formats for representing reals
in computer programs~\cite{goldberg1991whatevery}. Roughly speaking, a
floating-point number is a pair \((M, E)\) called mantissa and exponent,
@@ -18,7 +18,7 @@ Fortunately, bounding signals is something very common and often used in
compilation, for instance to detect at compile time whether the index of an array may
exceed its size, to decide whether a for loop can be efficiently unrolled, or
simply to ensure that no integer overflow occurs. A standardized technique is interval
\subsection{Interval Arithmetic in Faust}
@@ -101,7 +101,7 @@ part.
On the contrary, Gappa uses its set of rewriting rules to explore a space of
semantically equivalent formulae, looking for the most efficient one when translated
into intervals\cite{dedinechincertifyingfloatingpoint}. As this space is very
into intervals \cite{dedinechin2011certifyingfloatingpoint}. As this space is very
large, Gappa also embeds a set of heuristics in order to remain efficient. Another way
to deal with the combinatorial explosion is to use equality
saturation~\cite{willsey2021eggfast}, a procedure iteratively replacing an