Towards computing on energy current

My further discourse with Ed and Ivor last night has resulted in the following message from Ed.

Ed:

Alex,

Your message to Ivor yesterday makes me consider once again the mathematical equations Ivor introduces in his paper “The Heaviside Signal”. I concentrate on the equation c(dE/dx) = dE/dt in Appendix 1 (ignoring the “−” sign on the right side). This formula is constructed by interpreting the propagation of a voltage step in space (first diagram) and in time (second diagram) as a velocity v of motion in the direction of time, v = dx/dt, that is, a vector quantity. In the diagrams, this velocity v is symbolized by the letter c. With dx/dt = v, the equation c(dE/dx) = dE/dt results in c = dx/dt = v. This asserted equality of c and v then insinuates that the voltage step really “travels” in space and time with a vector velocity v = c in the direction of time.

As I see things, the equation v = c is mistaken, since v is a vector quantity, which c is not. c is a scalar, as is proven by the Poynting vector E when put over p, resulting in c: E/p = c. Vector E over vector p = mv; v is a vector, c is a scalar quantity. Q.e.d. So c does not symbolize some velocity of some motion in the direction of time. What else does this factor mean, given that it is undoubtedly a quotient of a quantity of space, L, over a quantity of time, T, c [L/T]? If you draw a diagram of Cartesian coordinates, space L on the vertical axis, time T on the horizontal, and you put the elements of space, L, over the elements of time, T, you get the constant quotient [L/T] that characterizes the diagram as its parameter, or “grating constant”. Evidently this constant is not a vector, but a scalar.

This shows that not every quotient dx/dt [L/T] represents a velocity of propagation in time, as the vector v does. Rather, such a quotient, all the more if it is a constant (!), may just represent the parameter of the space-time frame of reference wherein an observed phenomenon like the above-mentioned voltage step takes place. And this takes us into the middle of our finding that Poynting's energy vector E = p × c differs from the classical scalar energy E = mv^2/2. This difference, as we can see now, is a consequence of not distinguishing between the velocity v (a vector, and a variable) and the scalar constant c. Do you agree?

Ed.   

To which I have replied the following:

Ed,

I am afraid I disagree with your conclusion that the c coming out of the analysis of c(dE/dx) = dE/dt in Appendix 1 is a vector!

I am afraid you build a metaphysical story around it.

This c is a constant coefficient. c is not a vector – it is a scalar. We have vectors around it: dE/dt and dE/dx. These are vectors – one is force (or power in modern terms) and the other is momentum (as we agreed before).

Full stop here!
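
For the record, a minimal dimensional reading of that coefficient, written out in standard notation (my own sketch, not Ivor's or Ed's wording):

```latex
% From c\,(dE/dx) = dE/dt, the coefficient c is just the ratio of the two derivatives:
\[
  c \;=\; \frac{dE/dt}{dE/dx}
  \qquad
  \left[\frac{\mathrm{J/s}}{\mathrm{J/m}}\right] \;=\; \left[\frac{\mathrm{m}}{\mathrm{s}}\right]
\]
% i.e. a constant with the dimensions of a speed, entering the equation as a scalar
% coefficient that relates the temporal and spatial rates of change of E.
```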

Then, we should acknowledge the fact that there is a physical element behind this c – and this is the energy current, which emanates through the universe from the Big Bang!

This is the carrier of interactions in Nature. This wasn't known to Aristotle, Galileo, Newton, Maxwell … Ivor was and is the first to give this carrier its appropriate place.

If you knew a bit of the physics of communication (I recommend reading Hans Schantz's papers and book for that), you'd realise that it is completely natural for communication (or interaction, for that matter) to have a carrier – and this carrier for ExH does not need massed matter – it can perfectly well live in a vacuum.

That's my take on this. You have played an important role in this discourse. We have identified what momentum is in Catt's theory and in the Heaviside Signal. Eureka!

What I have also discovered, thanks to Ivor, is a potential way towards future computing – one based NOT on envelope characteristics, such as exponentials and sines, but on discretised steps. This can potentially improve the speed of computation by 2-3 orders of magnitude – we just need appropriate devices that support this speed and react to changes transmitted in the energy current. I am already working on this!!!

Alex

 

Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a 'mothball computer' – a capsule whose outer surface harvests power from the environment, with the computational electronics inside.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays or ultimately by gamma rays!

Power for the modulation electronics can be generated by solar cells – perovskite cells. Because perovskite cells contain lead, they can also shield gamma rays from propagating outside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a 'continuum' range of temporal compute, operating anywhere from a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse averaged analog signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443)
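
To make that 'continuum' a bit more concrete, here is a minimal toy sketch (my own illustration, not taken from the Mars & Poppelbaum book) of the two ends of the range: a value encoded as a deterministic pulse burst versus a stochastic one, both recovered by time-averaging:

```python
import random

def deterministic_burst(x, n):
    """Encode x in [0, 1] as a deterministic burst: the first round(x*n) slots carry a pulse."""
    k = round(x * n)
    return [1] * k + [0] * (n - k)

def stochastic_burst(x, n, rng=random):
    """Encode x in [0, 1] as a stochastic burst: each slot carries a pulse with probability x."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def decode(burst):
    """Recover the value as the time average of the burst."""
    return sum(burst) / len(burst)

if __name__ == "__main__":
    x = 0.3
    for n in (8, 64, 1024):
        det = decode(deterministic_burst(x, n))
        sto = decode(stochastic_burst(x, n))
        print(f"n={n:5d}  deterministic={det:.4f}  stochastic={sto:.4f}")
```

For short bursts the two ends behave very differently (the stochastic estimate is noisy, the deterministic one is merely quantised); for long bursts both converge to the encoded value, which is exactly the trade-off the 'continuum' is about.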

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

Talking at the AI Workshop held at the Center for AI Research (CAIR) at University of Agder, Norway

I was invited to the University of Agder, in the South of Norway (in a nice town called Grimstad, famous for the presence of Henrik Ibsen and Knut Hamsun), to present my vision of what kind of hardware we need for pervasive AI. This presentation was part of a workshop organised by Prof Ole-Christoffer Granmo, Director of CAIR, on the occasion of the grand opening of CAIR – https://cair.uia.no

In my presentation I emphasized the following points:

  • Pervasive Intelligence requires reconsidering many balances:
    – Between software and hardware
    – Between power and compute
    – Between analog and digital
    – Between design and fabrication and maintenance
  • Granulation phenomenon: Granularity of power, time, data and function
  • Main research questions:
    – Can we granulate intelligence to a minimum?
    – What is the smallest level at which we can make cyber-systems learn in terms of power, time, data and function?
  • Grand challenge for pervasive hardware AI:
    To enable electronic components with an ability to learn and compute in real-life environments with real-power and in real-time
  • Research Hypothesis:
    We should design systems that are energy-modulated and self-timed, with maximally distributed learning capabilities

I put forward a strong hypothesis about the role of Tsetlin Automata (automata with linear tactics) in building electronics with high-granularity learning capabilities.

The key elements of the proposed approach are:

  • Event-driven, robust to power and timing fluctuations
  • Decentralised Tsetlin Automata (TAs) for learning on demand
  • Mixed digital-analog compute, where elements are enabled and controlled by individual TAs
  • Approximate by nature, both in learning and in compute
  • Asynchronous logic for h/w implementation

The full set of my slides is here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/AlexYakovlev-AI%20Hardware-070219.version3.pdf
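
For reference, here is a minimal Python sketch of a single two-action Tsetlin automaton with linear tactics – the learning element I propose to decentralise across the hardware (my own toy illustration, not CAIR's code):

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton with linear tactics and 2*n memory states.

    States 1..n select action 0, states n+1..2n select action 1.
    A reward moves the state deeper into the current action's half;
    a penalty moves it towards the boundary and eventually flips the action.
    """
    def __init__(self, n=6):
        self.n = n
        self.state = random.choice([n, n + 1])  # start near the boundary

    @property
    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        if self.action == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        if self.action == 0:
            self.state += 1   # drift towards (and eventually past) the boundary
        else:
            self.state -= 1


if __name__ == "__main__":
    # Toy environment: action 1 is rewarded 70% of the time, action 0 only 40%.
    ta = TsetlinAutomaton(n=6)
    p_reward = {0: 0.4, 1: 0.7}
    for _ in range(2000):
        if random.random() < p_reward[ta.action]:
            ta.reward()
        else:
            ta.penalize()
    print("converged to action", ta.action)
```

In hardware, each such automaton is essentially a small saturating counter, which is what makes per-element, on-demand learning plausible at very fine granularity.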

My Talk at the RAEng Fellows Day at Newcastle

I was invited to give a talk on my research at a Royal Academy of Engineering event, held in Newcastle on the 28th January 2019.

The title of the talk is “Asynchronous Design Research or Building Little Clockless Universes”.

The PDF of the slides of my talk is here: http://async.org.uk/presentations/AlexYakovlev-Research-RAEngEvent-280119.pdf

I only had 15 minutes given to me. Not a lot for talking about 40 years of research life. So, at some point in preparing for this talk, I decided that I would try to explain what research in microelectronic systems design is about, and in particular how my research in asynchronous design helps it.

Basically, I tried to emphasize the role of 'time control' in designing 'little universes', where the time span covered by our knowledge of what is going on in those systems, and why, stretches from a few picoseconds (a transistor switching event) to hours if not days (application lifetimes). So we cover around 10^18 events. How does that compare to the life of the universe, which is "only" around 10^10 years, i.e. about 4 x 10^17 seconds? Are we as powerful as gods in creating our 'little universes'?

So, in my research I want to better control TIME at the smallest possible scale – surprisingly, by going CLOCK-LESS! Clocking creates an illusory notion of determinacy in tracking events and their causal relationships. Actually, it obscures such information. Instead, by doing your circuit design in a disciplined way, such as speed-independent circuit design, you can control the timing of events down to the finest levels of granularity. In my research I achieved that level of granularity for TIME. It took me some 40 years!

Furthermore, more recently, say in the last 10 years, I have managed to learn pretty well how to manage power and energy down to that smallest possible level too, and actually make sure that energy consumption is known at the level of events controlled in a causal way. Energy/power-modulated computing, and its particular form of power-proportional computing, is the way to do that. We can really keep track of where energy goes down to the level of a few femtojoules. Indeed, if the parasitic capacitance of an inverter output in modern CMOS technology is around 10 fF and we switch it at Vdd = 1 V, we are talking about a minimum energy quantity of CV^2 = 10 fJ = 10^-14 J per charging/discharging cycle (0-1-0 in terms of logic levels). Mobile phones run applications that can consume energy at the level of 10^4 J. Again, as with time, we seem to be pretty well informed about what's going on in terms of energy, covering a range of 10^18! Probably I'll just need another 5 or so years to conquer determinacy in energy and power terms – our work on Real-Power Computing is in this direction.
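
A quick sanity check of these numbers, using the same illustrative values as in the paragraph above:

```python
# Back-of-envelope check of the energy range quoted above (illustrative values
# from the text, not measurements).
C = 10e-15          # parasitic capacitance of an inverter output, 10 fF
Vdd = 1.0           # supply voltage, 1 V

e_switch = C * Vdd ** 2   # energy per full 0-1-0 charge/discharge cycle
e_app = 1e4               # energy budget of a phone application, ~10^4 J

print(f"energy per switching event: {e_switch:.1e} J")             # ~1.0e-14 J = 10 fJ
print(f"dynamic range of 'known' energy: {e_app / e_switch:.1e}")  # ~1.0e+18
```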

Now, what's next, you might ask? What other granularization, distribution and decentralization can we conquer in terms of building little universes!? The immediate guess that comes to my mind is the distribution (in the time and energy directions) of functionality, and, to be more precise, of intelligence. Can we create granules of intelligence at the smallest possible scale, and cover the same orders of magnitude? It is a hard task. Certainly, for CMOS technology it would be really difficult to imagine that we could force something like a small collection of transistors to dynamically learn and optimize its functionality. But there are ways of getting pretty close to that. One of them seems to be the direction of learning automata. Read about Tsetlin automata, for example (https://en.wikipedia.org/wiki/Tsetlin_machine), in the recent work of Ole-Christoffer Granmo.


Power-staggered computing

In the past, people were trying to develop efficient algorithms for solving complex problems. The efficiency criteria would often be limited to performance, CPU time, or memory size. Today, CPU time or memory is not a problem. What is a problem is to fit your computational solution within the bounds of available energy resources and yet deliver sufficient quality.

This angle of attack started to rise on the horizon of computing about a decade or so ago when people began to put many CPU/GPU/FPGA and memory cores on a die.

Terms such as power/energy-proportional computing and energy-modulated computing (my term!) began to emerge to describe this approach.

What we should now look at more is how to develop algorithms and compute architectures that are not simply energy-efficient or fast, but that are aware of the information they process, of the level and granularity of its importance or significance, as well as of the implementation technology underlying the compute architectures.

This is underpinned by the concept of approximate computing – not in the sense of approximating the processed data, say by truncating the data words, but rather of approximating the functions that process this data.

For example, instead of (or in addition to) trying to tweak an exact algorithm that works at O(n^3) to work at O(n^2 log n), we can find an approximate, i.e. inexact, algorithm that works at O(n) and could work hand-in-hand with the exact one. Those algorithms would be expected to play different roles. The inexact one would act as an assistant to the exact one. It would work as a whistle-blower for the latter. It would give some classification results on the data, at a very low power cost, and then only wake up the exact one when necessary, i.e. when the significance of the processing should go up.
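
Schematically, that staggered arrangement might look like the sketch below (the function names, the standard-deviation detector and the threshold are hypothetical placeholders, not an actual implementation):

```python
import statistics

SIGNIFICANCE_THRESHOLD = 5.0  # tuning parameter, chosen arbitrarily here

def cheap_detector(block):
    """Inexact, low-power 'whistle-blower': a simple statistical significance measure."""
    return statistics.stdev(block)

def exact_processing(block):
    """Expensive, exact computation; only invoked when the detector blows the whistle."""
    return sorted(block)  # placeholder for the real, costly processing

def power_staggered(blocks):
    results = []
    for block in blocks:
        if cheap_detector(block) > SIGNIFICANCE_THRESHOLD:
            results.append(exact_processing(block))   # significant: spend the energy
        else:
            results.append(None)                      # insignificant: stay asleep
    return results

if __name__ == "__main__":
    data = [[1, 1, 2, 1], [3, 40, 2, 90], [5, 5, 6, 5]]
    print(power_staggered(data))   # only the middle block wakes the exact stage
```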

One can think about such a power-staggered (and performance-staggered too!) approach in various contexts.

One such example is the work of our PhD student Dave Burke, who developed a significance-driven image processing method. He detects the significance gradient based on statistical measures, such as standard deviation (cf. the inexact compute algorithm), and makes a decision on whether and where to apply more exact computation.

Watch this great video from Dave: https://www.youtube.com/watch?time_continue=1&v=kbKhU7CvEb8   and observe the effects of power-staggered computing!

 

Asynchronous drive from Analog

Run smarter – Live longer!

Breathe smarter – Live longer!

Tick smarter – Live longer!

I could continue listing such slogans for designing better electronics for the era of trillions of devices and peta-, exa- and zettabits of information produced on our small planet.

Ultimately it is about how good we are at TIMING our ingestion and processing of information. TIMING has been and will always be a key design factor, one that determines other factors such as performance, accuracy and energy efficiency of the system, and even the productivity of design processes.

As computing spreads into the periphery, i.e. goes into ordinary objects and fills the forms of these objects like water fills the shape of a cup, it is only natural to think that computing at the periphery, or edge, should be determined more by the nature of the environment than by the rules of computer design that dominated the bygone era of compute-centrism. Computing has for ages been quite selfish and tyrannical. Its agenda has been set by scaling the size of semiconductor devices and by the growing complexity of the digital part. This scaling process had two important features. One was increasing speed and power consumption, which has led to an ongoing growth in data server capacity. The other was that the only way to manage the complexity of the digital circuitry was to use a clock in the design, to avoid potential race conditions in circuits. As computing reaches the periphery it does not need to become as complex and clocky as those compute-centric digital monsters. Computing has to be much more environment-friendly. It has to be amenable to the conditions and needs of the environment – otherwise it simply won't survive!

But the TIMING factor will remain! What will then drive this factor? It certainly won't only be the scaling of devices and the drive for higher throughput by means of a clock. Why? For example, because we will not be able to provide enough power for that high throughput – there isn't enough lithium on the planet to make so many batteries. Nor do we have enough engineers or technicians to keep replacing those batteries. On the other hand, we don't need a clock to run the digital parts of those peripheral devices, because they will not be that complex. So, where will TIMING come from? One natural way of timing these devices is to extract TIMING directly from the environment – to be precise, from the ENERGY flows in the environment.

We have always used a power supply wire in our electronic circuits. Yes, but we have always used it as an always-ON servant, which had to be there to give us 5 Volts or 3 Volts, or more recently 1 Volt or even less (the so-called sub-threshold operation), like 0.4 Volts. That wire has never been much of a signal carrying information value. Why? Well, because such information value was always in other signals, which would give us either data bits or clock ticks. Today it is time to reconsider this traditional thinking and widen our horizon by looking at the power supply signal as a useful information source. Asynchronous or self-timed circuits are fundamentally much more cognizant of the energy flow. Such circuits naturally tune their tick boxes to the power levels and run/breathe/tick smarter!

At Newcastle we have been placing asynchronous circuits at the edge with the environment, inside analog electronics – in particular, in power regulation circuits, A-to-D converters and various sensors (voltage, capacitance, …). This approach allows us to significantly reduce the latencies and response times to important events in the analog domain, and to reduce the sizes of passives (caps and inductors). But perhaps most importantly, thanks to our asynchronous design tools under Workcraft (http://workcraft.org), we have made asynchronous design much more productive. Industrial engineers in the analog domain are falling in love with our tools.

More information can be found here:

http://async.org.uk

https://www.ncl.ac.uk/engineering/research/eee/microsystems/

 

My keynote at Norwegian Nanoelectronics Network Workshop – 13 June 2018

I attended a highly stimulating networking workshop in Norway, called Nano-Network:

http://www.nano-network.net/workshop/

It was held in an idyllic place on the island of Tjome, south of Oslo.

Lots of excellent talks. Here is the programme:

http://www.nano-network.net/wp-content/uploads/2018/06/Workshop-programme-2018.pdf

and I gave my invited talk on “Bridging Asynchronous Circuits and Analog-Mixed Signal Design”. Here are the slides:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Nano-Micro-2018-Yakovlev-short-no-animation.pdf

The whole event was highly stimulating, with an exciting social programme: a challenging adventure towards Verdens Ende (World's End), with lots of tricky questions and tests on the way. Our team did well … but we weren't the winners 🙁

 

Energy-vector, momentum, causality, Energy-scalar …

Some more interesting discussions with Ed Dellian have resulted in this 'summary', made in the context of my current level of understanding of the Catt theory of electromagnetism:

  1. Energy current (E-vector) causes momentum p.
  2. Causality is established via the proportionality coefficient c (the speed of the energy current).
  3. Momentum p is what mediates between the E-vector and changes in the matter.
  4. Momentum p is preserved as the energy current hits the matter.
  5. Momentum in the matter represents another form of energy (E-scalar).
  6. E-scalar characterises the elements of the matter as they move with a (material) velocity.
  7. As elements of the matter move, they cause changes in the energy current (E-vector), and this forms a fundamental feedback mechanism (which is recursive/fractal …).

Telling this in terms of EM theory and electricity:

  • E-vector (Poynting vector aka Heaviside signal) causes E-scalar (electric current in the matter).
  • This causality between E-vector and E-scalar is mediated by momentum p causing the motion of charges.
  • The motion of charges with material velocity causes changes in E-vector, i.e. the feedback effect mentioned above (e.g. self-induction)
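
For concreteness, here is my reading of items 2 and 5 written as formulas (a sketch under the assumption that the relation E = pc, which we agreed on earlier, is the right way to write the E-vector/momentum link; not a definitive statement of Catt's or Ed's position):

```latex
% Item 2: causality between energy current and momentum via the scalar coefficient c
\[
  E_{\mathrm{vector}} \;=\; p\,c
\]
% Item 5: once in the matter, the same momentum p = m v appears as the classical scalar energy
\[
  E_{\mathrm{scalar}} \;=\; \tfrac{1}{2}\,m v^{2} \;=\; \tfrac{1}{2}\,p\,v
\]
```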

I’d be most grateful if someone refutes these items and bullets.

I also recommend reading my blog post (from 2014) on discretisation:

On Quantisation and Discretisation of Electromagnetic Effects in Nature

On the “Свой – Чужой” (Friend – Foe) paradigm, and can we do as well as Nature?

I recently discovered that there is no accurate linguistic translation of the words “Свой” and “Чужой” from Russian to English. A purely semantic translation of “Свой” as “Friend” and “Чужой” as “Foe” will only be correct in this particular paired context of “Свой – Чужой” as “Friend – Foe”, which sometimes conveys the same idea as “Us – Them”. I am sure there are many idioms that are likewise translated as the “whole dish” rather than by ingredients.

Anyway, I am not going to discuss here linguistic deficiencies of languages.

I'd rather talk about the concept, or paradigm, of “Свой – Чужой”, or equally “Friend – Foe”, that we can observe in Nature as a way of enabling living organisms to survive as species through many generations. WHY, for example, does one particular species not produce offspring as a result of mating with another species? I am sure geneticists would have some “unquestionable” answers to this question. But those answers would probably either be so trivial that they wouldn't trigger any further interesting technological ideas, or so involved that they'd require studying the subject at length before seeing any connections with non-genetic engineering. Can we hypothesize about this “Big WHY” by looking at analogies in technology?

Of course, another question crops up: why is that particular WHY interesting, and maybe of some use, to us engineers?

Well, one particular form of usefulness can be in trying to imitate this “Friend – Foe” paradigm in information processing systems to make them more secure. Basically, what we want to achieve is that if a particular activity has a certain “unique stamp of a kind”, it can only interact safely and produce meaningful results with another activity of the same kind. As activities or their products lead to other activities, we can think of some form of inheritance of the kind, as well as of evolution in the form of creating a new kind with another “unique stamp of that kind”.

Look at this process as a physical process driven by energy. Energy enables the production of offspring actions/data from actions/data of a similar kind (Friends leading to Friends) or of a new kind, which is again protected from intrusion by the actions/data of others, or Foes.

My conjecture is that the DNA mechanisms in Nature underpin this “Friend – Foe” paradigm by applying unique identifiers, or DNA keys. In the world of information systems we generate keys (by prime generators, with filters to separate them from the already-used primes) and use encryption mechanisms. I guess that the future of electronic trading, if we want it to be survivable, is in making the available energy flows generate masses of such unique keys and stamp our actions/data as they propagate.
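
The smallest possible sketch of that stamping idea in information-systems terms might look like this (a hypothetical illustration using a keyed hash, not a claim about how DNA, or blockchains, actually do it):

```python
import hmac
import hashlib
import os

# Every action/data item is stamped with a key identifying its "kind";
# only items stamped with the same kind-key interact meaningfully.

def new_kind_key():
    """Generate a fresh 'kind' key (standing in for a DNA-like identifier)."""
    return os.urandom(32)

def stamp(data: bytes, kind_key: bytes) -> bytes:
    """Stamp the data with its kind."""
    return hmac.new(kind_key, data, hashlib.sha256).digest()

def same_kind(data: bytes, tag: bytes, kind_key: bytes) -> bool:
    """Accept the interaction only if the stamp matches our own kind."""
    return hmac.compare_digest(tag, stamp(data, kind_key))

if __name__ == "__main__":
    friend_key, foe_key = new_kind_key(), new_kind_key()
    msg = b"offspring action"
    tag = stamp(msg, friend_key)
    print(same_kind(msg, tag, friend_key))  # True: Friend
    print(same_kind(msg, tag, foe_key))     # False: Foe
```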

Blockchains are probably already using this “Свой – Чужой” paradigm, aren't they? I am curious how mother Nature manages to generate these new DNA keys and not run out of energy. Probably there is hidden reuse there? There should be a balance between complexity and productivity somewhere.