The special issue celebrating 125 years of Oliver Heaviside’s ‘Electromagnetic Theory’, compiled and edited by Christopher Donaghy-Spargo and Alex Yakovlev, is now online:

http://rsta.royalsocietypublishing.org/content/376/2134

My paper “Energy current and computing” is here:

http://rsta.royalsocietypublishing.org/content/376/2134/20170449


This angle of attack started to rise on the horizon of computing about a decade or so ago, when people began to put many CPU, GPU, FPGA and memory cores on a single die.

Terms such as power/energy-proportional computing and energy-modulated (my term!) computing began to emerge to describe this approach.

What we should now look at more closely is how to develop algorithms and architectures that are not simply energy-efficient or fast, but that are aware of the information they process, of the level and granularity of its importance or significance, and of the implementation technology underlying the compute architectures.

This is underpinned by the concept of approximate computing, not in the sense of approximating the processed data (say, by truncating the data words), but rather of approximating the functions that process this data.

For example, instead of (or in addition to) trying to tweak an exact algorithm that runs in O(n^3) time so that it runs in O(n^2 log n), we can find an approximate, i.e. inexact, algorithm that runs in O(n) and works hand-in-hand with the exact one. The two algorithms would play different roles. The inexact one would act as an assistant, a whistle-blower, to the exact one: it would produce rough classification results on the data at a very low power cost, and wake up the exact one only when necessary, i.e. when the significance of the processing should go up.
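Here is a minimal sketch of this whistle-blower pattern in Python; the function names, the choice of standard deviation as the cheap significance proxy, and the threshold are all illustrative assumptions, not a published implementation:

```python
import numpy as np

SIGNIFICANCE_THRESHOLD = 0.1  # tuning parameter (assumption)

def cheap_significance(x: np.ndarray) -> float:
    """O(n) proxy for how 'interesting' the data is (here: std deviation)."""
    return float(np.std(x))

def exact_compute(x: np.ndarray) -> np.ndarray:
    """Stand-in for the expensive exact algorithm (e.g. the O(n^3) one)."""
    return np.sort(x)

def staggered_compute(x: np.ndarray):
    # The inexact assistant runs all the time, at a very low power cost...
    if cheap_significance(x) < SIGNIFICANCE_THRESHOLD:
        return None  # nothing significant: keep the exact engine asleep
    # ...and wakes up the exact engine only when significance goes up.
    return exact_compute(x)
```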

One can think about such a power-staggered (and performance-staggered too!) approach in various contexts.

One such example is the work of our PhD student Dave Burke, who developed a significance-driven image processing method. He detects the significance gradient based on statistical measures, such as standard deviation (cf. the inexact compute algorithm), and decides whether and where to apply more exact computation, as sketched below.
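A rough sketch of what such block-wise significance detection might look like; the block size and the threshold are my assumptions, not Dave’s actual parameters:

```python
import numpy as np

def significance_map(image: np.ndarray, block: int = 8,
                     threshold: float = 12.0) -> np.ndarray:
    """Mark the blocks whose standard deviation is high enough to
    justify waking up the exact (expensive) processing."""
    h, w = image.shape
    sig = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, (h // block) * block, block):
        for j in range(0, (w // block) * block, block):
            tile = image[i:i + block, j:j + block]
            sig[i // block, j // block] = tile.std() > threshold
    return sig
```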

Watch this great video from Dave: https://www.youtube.com/watch?time_continue=1&v=kbKhU7CvEb8 and observe the effects of power-staggered computing!


Breathe smarter – Live longer!

Tick smarter – Live longer!

I could continue listing such slogans for designing better electronics for the era of trillions of devices and of peta-, exa- and zettabits of information produced on our small planet.

Ultimately it is about how good we are at TIMING our ingestion and processing of information. TIMING has been and will always be a key design factor, one that determines other factors such as the performance, accuracy and energy efficiency of the system, and even the productivity of design processes.

As computing spreads into the periphery, i.e. as it goes into ordinary objects and fills the forms of these objects like water fills the shape of a cup, it is only natural to think that computing at the periphery or edge should be determined more by the nature of the environment than by the rules of computer design that dominated the bygone era of compute-centrism. Computing has for ages been quite selfish and tyrannical. Its agenda has been set by scaling the size of semiconductor devices and by the growing complexity of the digital parts. This scaling process had two important features. One was ever-increasing speed and power consumption, which has led to an ongoing growth in data server capacity. The other was that the only way to manage the complexity of the digital circuitry was to use a clock in the design, to avoid potential race conditions in circuits. As computing reaches the periphery it does not need to become as complex and clocky as those compute-centric digital monsters. Computing has to be much more environment-friendly. It has to be amenable to the conditions and needs of the environment, otherwise it simply won’t survive!

But the TIMING factor will remain! What will then drive this factor? It certainly won’t be only the scaling of devices and the drive for higher throughput by means of a clock. Why? For example, because we will not be able to provide enough power for that high throughput: there isn’t enough lithium on the planet to make so many batteries, nor do we have enough engineers or technicians to keep replacing those batteries. On the other hand, we don’t need a clock to run the digital parts of those peripheral devices, because they will not be that complex. So, where will TIMING come from? One natural way of timing these devices is to extract TIMING directly from the environment, and, to be precise, from the ENERGY flows in the environment.
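As a toy model of this idea, consider a device that accumulates harvested energy and ‘ticks’ (performs a computational step) each time the stored energy crosses a threshold, so its rate of work is paced by the environment rather than by a clock. All names and constants below are illustrative assumptions:

```python
import random

TICK_ENERGY = 1.0  # energy needed per computational step (arbitrary units)

def energy_modulated_ticks(slots: int, mean_harvest: float) -> int:
    """Count ticks produced over `slots` time slots, with bursty
    energy harvesting of a given mean amount per slot."""
    stored, ticks = 0.0, 0
    for _ in range(slots):
        stored += random.expovariate(1.0 / mean_harvest)  # bursty inflow
        while stored >= TICK_ENERGY:
            stored -= TICK_ENERGY
            ticks += 1  # the environment, not a clock, paces the device
    return ticks

# A richer energy flow yields a faster-ticking device:
print(energy_modulated_ticks(1000, 0.5), energy_modulated_ticks(1000, 2.0))
```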

We have always used a power supply wire in our electronic circuits. Yes, but we have always used it as an always-ON servant, which had to be there to give us 5 volts or 3 volts, or more recently 1 volt or even less (the so-called sub-threshold operation), like 0.4 volts. That wire has never been much of a signal carrying information value. Why? Because such information value was always in the other signals, which would give us either data bits or clock ticks. Today it is time to reconsider this traditional thinking and widen our horizon by looking at the power supply signal as a useful information source. Asynchronous, or self-timed, circuits are fundamentally much more cognizant of the energy flow. Such circuits naturally tune their ticking to the available power levels and run/breathe/tick smarter!

At Newcastle we have been placing asynchronous circuits at the edge with the environment, inside analog electronics: in particular, in power regulation circuits, A-to-D converters and various sensors (voltage, capacitance, …). This approach allows us to significantly reduce the latencies and response times to important events in the analog domain and to reduce the sizes of passives (caps and inductors); but perhaps most importantly, thanks to our asynchronous design tools under Workcraft (http://workcraft.org), we have made asynchronous design much more productive. Industrial engineers in the analog domain are falling in love with our tools.

More information can be found here:

https://www.ncl.ac.uk/engineering/research/eee/microsystems/


Inevitably, it “suffers” from approximation and abstraction compared to physical reality, a bit like the way an impressionist painting reflects the real scene.

The question is what and how much is sacrificed here.

One test of whether the sacrifice is acceptable is whether people, using mathematics, can build physical objects such as airplanes, cars, bridges, radios, computers, etc. If they can, and at a reasonable cost, then the language is adequate for the purpose.

For example, it seems that the mathematical language of Heaviside’s operational calculus is sufficient for designing and analysing electrical circuits of good quality in an acceptable time.

Another example: the language of Boolean algebra is sufficient to design logic circuits if we clock them safely so that they don’t produce any hazards. If, however, we don’t clock them safely, we need other ways to describe the causal relationships between events, such as Signal Transition Graphs.
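For a flavour of such a description, here is the classic Signal Transition Graph of a Muller C-element, written in the .g format read by asynchronous design tools such as Petrify and Workcraft (my transcription of a standard textbook example):

```
.model celement
.inputs a b
.outputs c
.graph
a+ c+
b+ c+
c+ a- b-
a- c-
b- c-
c- a+ b+
.marking { <c-,a+> <c-,b+> }
.end
```

Each arc states causality directly: the output transition c+ must wait for both a+ and b+, with no clock involved.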


The thesis is spread across several files (255 pages in total):

Title, Contents and Introduction:

Chapter 2 (Formalization of the behaviour of interacting objects and communication protocols):

Chapter 4 (Organization of aperiodic interface of intermodular communication):

(1) Example of context procedure

(2) Example of controlled protocol

(3) Application of Petri nets to specification of asynchronous discrete structures

(4) Information transfer on three-state lines

(5) Analysis and implementation of the TRIMOSBUS interface

Confirmation letter of practical deployment from the Ufa plant


I gave an invited talk on “Asynchronous Design for IoT”, where I also took a retrospective look at the history of developments in the field of asynchronous system design, in which I have been involved for nearly 40 years, first in St Petersburg and then in Newcastle.

The slides of my talk can be found here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Asynchronous%20Design%20for%20IoT%20-AlexY%20-%20ALIOT2018.pdf


http://www.nano-network.net/workshop/

It was held in an idyllic place on the island of Tjome, south of Oslo.

Lots of excellent talks. Here is the programme:

http://www.nano-network.net/wp-content/uploads/2018/06/Workshop-programme-2018.pdf

I gave an invited talk on “Bridging Asynchronous Circuits and Analog-Mixed Signal Design”. Here are the slides:

The whole event was highly stimulating, with an exciting social programme: a challenging adventure towards Verdens Ende (World’s End), with lots of tricky questions and tests on the way. Our team did well … but we weren’t the winners.


The slides in PDF are here:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Yakovlev-DESSERT2018-Kyiv.pdf

See http://www.async2018.wien/

I gave an invited ‘bridging’ keynote “Async-Analog: Happy Cross-talking?”.

Here are the slides in pdf:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/ASYNC18-FAC18-keynote-AY-last.pdf


- Energy current (**E-vector**) causes **momentum p**. **Causality** is made via the proportionality coefficient c (the **speed of energy current**).
- Momentum p is what mediates between the E-vector and changes in the **matter**.
- Momentum p is preserved as the energy current hits the matter.
- Momentum in the matter presents another form of energy (**E-scalar**).
- The E-scalar characterises the elements of the matter as they move with **a (material) velocity**.
- As elements of the matter move, they cause changes in the energy current (E-vector), and this forms a *fundamental feedback mechanism* (which is recursive/fractal …).

Putting this in terms of **EM theory and electricity**:

- E-vector (Poynting vector aka Heaviside signal) causes E-scalar (electric current in the matter).
- This causality between E-vector and E-scalar is mediated by momentum p causing the motion of charges.
- The motion of charges with material velocity causes changes in the E-vector, i.e. the feedback effect mentioned above (e.g. self-induction).

I’d be most grateful if someone could refute these items and bullets.

I also recommend reading my blog post (from 2014) on discretisation:

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Ed is one of the very few modern philosophers and science historians who have read Newton’s Principia in the original (and he produced his own translation of the Principia into German, published in 1988).

Ed’s position is that the real physical (Nature’s) laws reflect cause and effect in the form of **geometric proportionality**, the most fundamental being E/p = c, where E is energy, p is momentum and c is velocity, a proportionality coefficient, i.e. a constant associated with space over time. This view is in line with the Poynting-vector understanding of electromagnetism, also accepted by Heaviside in his notion of ‘energy current’. It is even the basis of Einstein’s E/(mc) = c, i.e. E = mc^2.
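In LaTeX form, this compact restatement of the proportionality (my rendering, not Ed’s notation) reads:

```latex
\[
\frac{E}{p} = c, \qquad\text{and, with } p = mc:\qquad
\frac{E}{mc} = c \;\Longleftrightarrow\; E = mc^{2}.
\]
```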

The diversion from geometric proportionality towards **arithmetic proportionality** was due to Leibniz and his principle of “*causa aequat effectum*”. According to Ed (I am quoting him here): “*it is a principle that has nothing to do with reality, since it implies ‘instantaneity’ of interaction, that is, interaction independently of ‘real space’ and ‘real time’, conflicting with the age-old natural experience expressed by Galileo that ‘nothing happens but in space and time’*”. It is therefore important to see how Maxwellian electromagnetism is viewed by scholars. For example, Faraday’s law states an *equivalence* of EMF and the rate of change of magnetic flux; it is not a geometric proportion, hence it is not causal!

My view, which is based on my experience with electronic circuits and my understanding of the causality between energy and information transfer (state changes), where energy is the cause and information transfer is the effect, is in agreement with geometric proportionality. Energy causes state transitions in space-time. This is what I call energy-modulated computing. It is challenging to refine this proportionality in every real problem case!

If you want to know more about Ed Dellian’s views, I recommend visiting his site http://www.neutonus-reformatus.de which contains several interesting papers.


First of all, any man-made statements about real-world phenomena are not, strictly speaking, physical, because they are formulated by humans within their perceptions, or, whether we want it or not, models, of the real world. So even if we use English to express our perceptions, we already depart from the “real physics”. Mathematics is just a man-made form of expression, one underpinned by formal rigour.

Now let’s get back to the interpretation of the “causes” (or causality) relation. It is often synonymised with the “gives rise to” relation. Such relations cause a lot of confusion when they originate from the interpretation of mathematical equations. For example, Faraday’s law in mathematical form, curl(E) = −dB/dt, does not say anything about the RHS causing or giving rise to the LHS. (Recall that B is proportional to H, with the permeability of the medium being the coefficient of proportionality.)
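In LaTeX form, the law and the constitutive relation just mentioned read:

```latex
\[
\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t},
\qquad
\mathbf{B} = \mu \mathbf{H},
\]
```

where \mu is the permeability of the medium. Nothing in this notation distinguishes cause from effect.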

The interpretation problem, when taken outside pure mathematics, leads, for example, to the question of HOW QUICKLY the RHS causes the LHS. And here we have no firm answer. The question of “how quickly does the cause have an effect” is very much physical (yet neither Faraday nor Maxwell stated anything about it!), because we are used to thinking that if A causes B, then we imply some temporal precedence between the event associated with A and the event associated with B. We also know that this ‘causal precedence’ is unlikely to act faster than the speed of light (we haven’t seen any evidence of information signalling acting faster than the speed of light!). Hence, causality at the speed of light may simply be the result of our causal interpretation. But then it is probably wrong to assume that Faraday or Maxwell gave this sort of interpretation to the above relationship.

Worth thinking about causality, isn’t it?

I have no clear answer, but in my opinion, reading the original materials on electromagnetic theory, such as Heaviside’s volumes, rather than modern textbooks would be a good recipe!

I recommend anyone interested in this debatable matter check out Ivor Catt’s view on it:

http://www.ivorcatt.co.uk/x18j73.pdf

http://www.ivorcatt.co.uk/x18j184.pdf

To the best of my knowledge, Catt was the first to notice and write about the fact that modern texts on electromagnetism actively use the ’causes’ interpretation of Maxwell’s equations. He also claims that such equations are “obvious truisms about any body or material moving in space”. The debatable matter may then move from the question of the legitimacy of the causal interpretation of these equations towards the question of how useful these equations are for an actual understanding of electromagnetism …


Anyway, I am not going to discuss the linguistic deficiencies of languages here.

I’d rather talk about the concept, or paradigm, of “Свой – Чужой”, or equally “Friend – Foe”, that we can observe in Nature as a way of enabling living organisms to survive as species through many generations. WHY, for example, does one particular species not produce offspring as a result of mating with another species? I am sure geneticists would have some “unquestionable” answers to this question. But those answers would probably either be too trivial to trigger any further interesting technological ideas, or too involved, requiring a lengthy study of the subject before any connections with non-genetic engineering could be seen. Can we hypothesise about this “Big WHY” by looking at analogies in technology?

Of course, another question crops up: why is that particular WHY interesting, and maybe of some use, to us engineers?

Well, one particular form of usefulness could be in trying to imitate this “Friend – Foe” paradigm in information processing systems to make them more secure. Basically, what we want to achieve is that an activity carrying a certain “unique stamp of a kind” can interact safely and produce meaningful results only with another activity of the same kind. As activities or their products lead to other activities, we can think of some form of inheritance of the kind, as well as of evolution in the form of creating a new kind with another “unique stamp of that kind”.

Look at this process as a physical process driven by energy. Energy enables the production of offspring actions/data from actions/data of a similar kind (Friends leading to Friends) or of a new kind, which is again protected from intrusion by the actions/data of others, the Foes.

My conjecture is that the DNA mechanisms in Nature underpin this “Friend – Foe” paradigm by applying unique identifiers, or DNA keys. In the world of information systems we generate keys (by prime generators, with filters to separate them from the already used primes) and use encryption mechanisms. I guess that the future of electronic trading, if we want it to be survivable, is in having the available energy flows generate masses of such unique keys and stamp our actions/data as they propagate.
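As a toy illustration of this “stamp of a kind” idea, here is a sketch using Python’s standard hmac module; the kind-key scheme and all the names are my assumptions, not a real protocol:

```python
import hashlib
import hmac
import os

# Toy 'Friend - Foe' stamping: activities of the same kind share a secret
# kind-key; data stamped by one kind is rejected by any other kind.

def new_kind() -> bytes:
    """Create a new 'kind': in this toy, just a fresh random secret key."""
    return os.urandom(32)

def stamp(kind_key: bytes, data: bytes) -> bytes:
    """Stamp data with the unique mark of its kind."""
    return hmac.new(kind_key, data, hashlib.sha256).digest()

def same_kind(kind_key: bytes, data: bytes, mark: bytes) -> bool:
    """The 'Friend' check: does the stamp match our own kind?"""
    return hmac.compare_digest(stamp(kind_key, data), mark)

species_a, species_b = new_kind(), new_kind()
msg = b"offspring action"
mark = stamp(species_a, msg)
print(same_kind(species_a, msg, mark))  # True: Friend
print(same_kind(species_b, msg, mark))  # False: Foe
```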

Blockchains are probably already using this “Свой – Чужой” paradigm, aren’t they? I am curious how mother Nature manages to generate these new DNA keys and not run out of energy. Probably there is some hidden reuse there? There should be a balance between complexity and productivity somewhere.

**“Internet of Things Technology Market by Node Component (Processor, Sensor, Connectivity IC, Memory Device, and Logic Device), Network Infrastructure, Software Solution, Platform, Service, End-use Application and Geography – Global Forecast to 2022”**

https://www.researchandmarkets.com/research/hld477/internet_of

“*IoT technology market expected to grow at a CAGR of 25.1% during the forecast period*”

“The IoT technology market is expected to be valued at USD 639.74 billion by 2022, growing at a CAGR of 25.1% from 2017 to 2022. The growth of the IoT technology market can be attributed to the growing market of connected devices and increasing investments in the IoT industry. However, the lack of common communication protocols and communication standards across platforms, and high-power consumption by connected devices are hindering the growth of the IoT technology market.”

This article studies the interplay between the performance, energy, and reliability (PER) of parallel-computing systems. It describes methods supporting the meaningful cross-platform analysis of this interplay. These methods lead to the PER software tool, which helps designers analyze, compare, and explore these properties. The web extra at https://youtu.be/aijVMM3Klfc illustrates the PER tool, expanding on the main engineering principles described in the article.

The PER tool can be found here:

www.async.org.uk/prime/PER/per.html

Open access paper version is here:

http://eprint.ncl.ac.uk/file_store/production/231220/F814D1A8-84ED-4996-A2C1-6BD3763E6456.pdf
