My paper “Energy current and computing” is online

Theme issue of the Royal Society Philosophical Transactions A

The theme issue celebrating 125 years of Oliver Heaviside’s ‘Electromagnetic Theory’, compiled and edited by Christopher Donaghy-Spargo and Alex Yakovlev, is now online:

http://rsta.royalsocietypublishing.org/content/376/2134

My paper “Energy current and computing” is here:

http://rsta.royalsocietypublishing.org/content/376/2134/20170449

 

Power-staggered computing

In the past, people tried to develop efficient algorithms for solving complex problems. The efficiency criteria were often limited to performance, CPU time, or memory size. Today, CPU time and memory are rarely the bottleneck. The problem is to fit your computational solution within the bounds of available energy resources and still deliver sufficient quality.

This angle of attack began to rise on the horizon of computing about a decade or so ago, when people began to put many CPU/GPU/FPGA and memory cores on a single die.

Terms such as power/energy-proportional computing and energy-modulated (my term!) computing began to emerge to describe this approach.

What we should look at more now is how to develop algorithms and architectures that are not simply energy-efficient or fast, but that are aware of the information they process, of the level and granularity of its importance or significance, and of the implementation technology underlying the compute architectures.

This is underpinned by the concept of approximate computing, not in the sense of approximating the processed data (say, by truncating the data words), but rather in the sense of approximating the functions that process this data.

For example, instead of (or in addition to) trying to tweak an exact algorithm that runs in O(n^3) to run in O(n^2 log n), we can find an approximate, i.e. inexact, algorithm that runs in O(n) and works hand-in-hand with the exact one. These algorithms would play different roles. The inexact one would act as an assistant, a whistle-blower, to the exact one: it would give some classification results on the data at a very low power cost, and only wake up the exact one when necessary, i.e. when the significance of the processing should go up. A minimal sketch of this pairing is given below.
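Here is a minimal sketch of the idea, assuming a Python setting where inexact_pass, exact_pass and the significance threshold are purely illustrative names and values (not from the paper):

    import numpy as np

    # Illustrative threshold; in practice it would be tuned per application.
    SIGNIFICANCE_THRESHOLD = 0.1

    def inexact_pass(block):
        """Cheap O(n) 'whistle-blower': a single statistic over the data block."""
        return float(np.std(block))

    def exact_pass(block):
        """Stand-in for the expensive, exact computation (e.g. O(n^3))."""
        return np.sort(block)  # a sorted copy stands in for the detailed processing

    def power_staggered(blocks):
        """Run the cheap pass on every block; wake the exact pass only when
        the estimated significance crosses the threshold."""
        results = []
        for block in blocks:
            significance = inexact_pass(block)
            if significance > SIGNIFICANCE_THRESHOLD:
                results.append(exact_pass(block))   # high significance: pay for exactness
            else:
                results.append(significance)        # low significance: cheap estimate suffices
        return results

    # Only the second block has enough variation to trigger the exact pass.
    blocks = [np.zeros(64), np.random.rand(64)]
    print(power_staggered(blocks))

The point is simply that the exact pass, and its energy cost, is only paid for the blocks that the cheap pass flags as significant.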

One can think about such a power- (and performance-) staggered approach in various contexts.

One such example is the work of our PhD student Dave Burke, who developed a significance-driven image-processing method. He detects the significance gradient using statistical measures such as standard deviation (cf. the inexact compute algorithm above) and decides whether, and where, to apply more exact computation.
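As a rough illustration only (this is not Dave’s actual implementation), a per-tile standard-deviation map could drive the decision of where to spend the exact computation; the tile size and threshold below are arbitrary:

    import numpy as np

    TILE = 16              # illustrative tile size
    SIG_THRESHOLD = 0.05   # illustrative significance threshold

    def significance_map(image):
        """Per-tile standard deviation as a cheap estimate of local significance."""
        h, w = image.shape
        sig = np.zeros((h // TILE, w // TILE))
        for i in range(0, (h // TILE) * TILE, TILE):
            for j in range(0, (w // TILE) * TILE, TILE):
                sig[i // TILE, j // TILE] = image[i:i+TILE, j:j+TILE].std()
        return sig

    def tiles_for_exact_processing(image):
        """Coordinates of the tiles whose significance warrants exact processing."""
        return np.argwhere(significance_map(image) > SIG_THRESHOLD)

    # A mostly flat image with one textured region: only its tiles are selected.
    img = np.zeros((128, 128))
    img[32:64, 32:64] = np.random.rand(32, 32)
    print(tiles_for_exact_processing(img))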

Watch this great video from Dave: https://www.youtube.com/watch?time_continue=1&v=kbKhU7CvEb8 and observe the effects of power-staggered computing!