Static vs Dynamic and Charges vs Fields

There is a constant debate in Electromagnetism between the Charge-based views and the Field-based views. I am of course over-simplifying the picture here, at least terminologically. But the main point is that you can talk about EM either from the point of view of: (i) objects that have mass, like electrons, protons, ions etc. – I call them collectively charges or charge carriers; or (ii) entities that carry EM energy, like the electric and magnetic field strengths, the Poynting vector etc. – these are not associated with mass. Both views are often linked to some form of motion, or dynamics. In the world of objects people talk about moving charges, electric current, static charges etc. In the world of fields, people talk about EM waves, TE, TM and TEM modes, energy current, static fields etc.

Often people talk about a mix of both views, and that is where many paradoxes and contradictions arise. For example, there is an interesting 'puzzle' posed to the world by Ivor Catt. It is sometimes called Catt's Question or the Catt Anomaly.

http://www.electromagnetism.demon.co.uk/cattq.htm

Basically, the question is this: when a voltage step is transmitted along a transmission line from a source to the far end, according to classical EM theory charge appears on both wires (+ on the leading wire, – on the grounded wire). Where does this new charge come from?

Surprisingly, no one has yet given a convincing answer that does not violate one or another aspect of classical EM theory.

In a similar vein, there is a challenge posed by Malcolm Davidson, called the Heaviside Challenge https://www.oliver-heaviside.com/ , which also has not received a consistent response, even though it carries a USD 5,000 prize!

So it seems that there is a fundamental problem in reconciling the two worlds, in a consistent theory based on physical principles and laws, rather than mathematical abstractions.

There is hope, however, that the way to understand and explain EM phenomena, especially in high-speed electronic circuits, is through the notion of a Heaviside signal and the principle of energy current (the Poynting vector) that never ceases travelling at the speed of light in the medium. In terms of energy current, perfect dielectrics are perfect conductors of energy, whereas perfect charge conductors are perfect insulators for the EM energy current.

So, while those who prefer the charge-based view of the world may continue to talk about static and dynamic charges, those who see the world via energy current live in a world where there is no such thing as a static electric or magnetic field, because a TEM signal can only exist in motion at the speed of light in the medium. The medium is characterised by its permittivity and permeability, which give rise to two principal parameters – the speed of light and the characteristic impedance. The inherent necessity of the TEM signal to move is stipulated by Galileo's and Newton's principles of geometric proportionality, which effectively define the relation between any change of a field parameter in time and its change in space. Those two changes are linked fundamentally, hence we have the coefficient of proportionality delta_x/delta_t, also known as the speed of light, which gives rise to causality between the propagation of energy or information and the momenta of forces acting on objects with mass.
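
As a small illustration (my own numerical sketch, not part of the original argument), here is how those two parameters fall out of permittivity and permeability; the eps_r = 4.3 value is just an assumed, FR-4-like example:

```python
# A minimal sketch (not from the original text): the two medium parameters,
# permittivity and permeability, fix both the propagation speed and the
# characteristic impedance of the TEM signal.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi      # vacuum permeability, H/m

def medium_parameters(eps_r=1.0, mu_r=1.0):
    """Return (speed of light, characteristic impedance) for a simple medium."""
    eps, mu = eps_r * eps0, mu_r * mu0
    c = 1.0 / math.sqrt(mu * eps)   # delta_x/delta_t of the TEM signal
    z0 = math.sqrt(mu / eps)        # ratio of E to H field amplitudes
    return c, z0

print(medium_parameters())           # vacuum: ~3.0e8 m/s, ~377 ohm
print(medium_parameters(eps_r=4.3))  # assumed FR-4-like dielectric: slower, lower Z0
```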

Another consequence of the ever-moving energy current is its ability to be trapped in a segment of space – pretty much what we have in a so-called capacitor – and thus form an energised fragment of space, which gives rise to an object with mass, e.g. a charged particle such as an electron. So this corollary of the first principle of energy current paves the way to the view of EM that is based on charged particles.

The Heaviside Prize

Last weekend I tweeted about the following exciting challenge:

The Heaviside Prize:

https://youtube.com/watch?v=mr9-Nu5HvWM&feature=youtu.be…

$5000 for someone who will explain the physical reality (without using maths!) of the electric current when a digital step propagates in a USB-like transmission line. Students, engineers, academics, tackle this challenge!!!

Correction on my previous blog and some interesting implications …

Andrey Mokhov spotted that to satisfy the actual inverse Pythagorean we need alpha = 1/2 rather than 2. That's right. Indeed, with alpha = 1/2 we have (1/a)^2 = (1/a1)^2 + (1/a2)^2, which is exactly what the inverse Pythagorean requires. In that case, for instance, if a1 = a2 = 2, then a must be sqrt(2). So the ratio between the individual decay a1 = a2 and the collective decay is sqrt(2). For our stack decay under alpha = 2, we would have, for a1 = a2 = 2, a = 1/2, so the ratio between the individual decay and the collective decay is 4.
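
For completeness, here is a tiny sketch of this composition rule; I am assuming the general form uses the exponent 1/alpha, which reproduces both cases discussed above:

```python
# Sketch of the decay-rate composition, assuming the general rule
# (1/a)**(1/alpha) = (1/a1)**(1/alpha) + (1/a2)**(1/alpha).
def collective_decay(a1, a2, alpha):
    p = 1.0 / alpha
    return ((1.0 / a1) ** p + (1.0 / a2) ** p) ** (-alpha)

for alpha in (0.5, 1.0, 2.0):
    a = collective_decay(2.0, 2.0, alpha)
    print(f"alpha={alpha}: a1=a2=2 -> a={a:.4f}, ratio={2.0 / a:.4f}")
# alpha=0.5 gives a=sqrt(2) (inverse Pythagorean); alpha=2 gives a=0.5 (ratio 4)
```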

It’s actually quite interesting to look at these relations a bit deeper, and see how the “Pythagorean” (geometric) relationship evolves as we change alpha from something like alpha<=1/2 to alpha>=2.

If we take alpha to 2 and above, the collective decay becomes slower than the individual decay by a factor of 4 or more. Physically this corresponds to the situation where the delay of an inverter in the ring becomes strongly inversely proportional to voltage. Geometrically, this is like contracting the height of a triangle whose sides open up beyond 90 degrees – say, for simplicity, an isosceles triangle with a 100-degree angle between the sides.

The case of alpha = 1/2 corresponds to the case where delay is proportional to the square root of voltage, and here the stack makes the decay rate follow the inverse Pythagorean! So this is the case of a triangle with sides at 90 degrees.

But if alpha goes below 1/2, the collective decay gets closer to the individual decays, and geometrically this is the height of a triangle whose sides close up to less than 90 degrees!

Incidentally, Andrey Mokhov suggested that we may consider a different physical interpretation of the inverse Pythagorean. Instead of looking at lengths a, b and h, one can consider the volumes Va, Vb and Vh of 4-D cubes with those side lengths. These volumes then relate exactly as in our case of alpha = 2, i.e. 1/sqrt(Vh) = 1/sqrt(Va) + 1/sqrt(Vb).
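
A quick numerical sanity check of this reading (my own snippet; the side lengths 3 and 4 are just an arbitrary example):

```python
# Check: if h, a, b satisfy 1/h**2 = 1/a**2 + 1/b**2, then the 4-D cube
# volumes V = side**4 satisfy 1/sqrt(Vh) = 1/sqrt(Va) + 1/sqrt(Vb).
from math import isclose, sqrt

a, b = 3.0, 4.0
h = (1.0 / a**2 + 1.0 / b**2) ** -0.5   # height from the inverse Pythagorean
Va, Vb, Vh = a**4, b**4, h**4           # 4-D cube volumes
assert isclose(1 / sqrt(Vh), 1 / sqrt(Va) + 1 / sqrt(Vb))
print(1 / sqrt(Vh), 1 / sqrt(Va) + 1 / sqrt(Vb))   # both ~0.1736
```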

Cool!


Charge decay in a stack of two digital circuits follows inverse Pythagorean Law!

My last blog post about my talk at HDT 2019 on Stacking Asynchronous Circuits contained a link to my slides. I recommend having a particular look at slide #21. It presents an interesting fact: the discharge rate of a series (stack) follows the law of the inverse Pythagorean!

It looks like Mother Nature caters for a geometric law of the most economical common ground between the two individual sides.

My Talk on Stacked Asynchronous Circuits at HDT 2019

I have just attended the Second Workshop on Hardware Design Theory, held in Budapest, co-located with the 33rd International Symposium on Distributed Computing http://www.disc-conference.org/wp/disc2019/

The HDT’19 workshop was organised by Moti Medina and Andrey Mokhov. It had a number of invited talks, and here is the programme: https://sites.google.com/view/motimedina/hdt-2019

I gave a talk on Stacked Asynchronous circuits.

Here is the abstract: In this talk we will look at digital circuits from the viewpoint of electrical circuit theory, i.e. as loads to power sources. Such circuits, especially when they are asynchronous, can be seen as voltage-controlled oscillators. Their switching behaviour, including their operating frequency, is modulated by the supply voltage. Interestingly, in the reverse direction, if they are driven by external event sources, their switching frequency determines their inherent impedance, which makes them ideal potentiometers or voltage dividers. Such circuits can be stacked like non-linear resistors in series and parallel, and lend themselves to interesting theoretical and practical results, such as RC circuits with hyperbolic capacitor discharges and designs of dynamic frequency mirrors.

Here is the PDF of my slides: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/stacked-async-budapest-2019.171019.pdf
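
As an aside, the 'hyperbolic capacitor discharge' mentioned in the abstract can be sketched with a toy model (my own illustration, with assumed parameter values): if the self-timed load draws a current proportional to V^2, the supply capacitor obeys C dV/dt = -k V^2, whose solution V(t) = V0 / (1 + (k V0 / C) t) is a hyperbola rather than an exponential.

```python
# Toy model (assumed parameter values): a self-timed ring drawing a current
# proportional to V**2 discharges its supply capacitor as a hyperbola.
C = 1e-9     # supply capacitance, F (assumed)
k = 1e-6     # load coefficient, A/V^2 (assumed)
V0 = 1.0     # initial voltage, V
dt = 1e-5    # simulation time step, s

V, t = V0, 0.0
for _ in range(6):
    analytic = V0 / (1.0 + (k * V0 / C) * t)
    print(f"t={t*1e6:5.1f} us  simulated V={V:.4f}  analytic V={analytic:.4f}")
    V -= (k / C) * V * V * dt   # forward-Euler step of C*dV/dt = -k*V**2
    t += dt
```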

Towards computing on energy current

My further discourse with Ed and Ivor last night has resulted in the following message from Ed.

Ed:

Alex,

your yesterday's message to Ivor makes me consider once again the mathematical equations Ivor introduces in his paper "The Heaviside Signal". I concentrate on the equation c(dE/dx) = dE/dt in Appendix 1 (ignoring the "–" on the right side). This formula is constructed by interpreting the propagation of a voltage step in space (first diagram) and in time (second diagram) as a velocity v of motion in the direction of time, v = dx/dt, that is, a vector quantity. In the diagrams, this velocity v is symbolized by the letter c. With dx/dt = v the equation c(dE/dx) = dE/dt results in c = dx/dt = v. This asserted equality of c and v then insinuates that the voltage step would really "travel" in space and time with a vector velocity v = c in the direction of time.

As I see things, the equation v = c is mistaken, since v is a vector quantity, which c is not. c is a scalar, as is proven by the Poynting vector E when put over p, resulting in c: E/p = c. Vector E over vector p = mv; v is a vector, c is a scalar quantity. Q.e.d. So c does not symbolize some velocity of some motion in the direction of time. What else does this factor mean, as it is undoubtedly a quotient of a quantity of space, L, over a quantity of time, T, c [L/T]?

If you draw a diagram of Cartesian coordinates, space L on the vertical axis, time T on the horizontal, and you put the elements of space, L, over the elements of time, T, you get the constant quotient [L/T] that characterizes the diagram as its parameter, or "grating constant". Evidently this constant is not a vector, but a scalar. This shows that not every quotient dx/dt [L/T] represents a velocity of propagation in time, as the vector v does. Rather it may be the case that such a quotient, the more so if it is a constant (!), just represents the parameter of the space-time frame of reference wherein an observed phenomenon like the above-mentioned voltage step takes place. And this takes us into the middle of our finding that Poynting's energy vector E = p x c differs from the classical scalar energy E = mv^2/2. This difference, as we can see now, is a consequence of not distinguishing between the velocity v (a vector, and a variable) and the scalar constant c. Do you agree?

Ed.   

To which I have replied the following:

Ed,

I am afraid I disagree with your conclusion about the c that comes out of the analysis of c(dE/dx) = dE/dt in Appendix 1!

You are building a metaphysical story around it, I am afraid.

This c is a constant coefficient. c is not a vector – it's a scalar. We have vectors around it: dE/dt and dE/dx. These are vectors – one is force (or power in modern terms) and the other is momentum (as we agreed before).

Full stop here!

Then, we should acknowledge the fact that there is a physical element behind this c – and this is energy current, which emanates in the universe from its Big Bang!

This is the carrier of interactions in Nature. This wasn't known to Aristotle, Galileo, Newton, Maxwell … Ivor was and is the first to give this carrier its appropriate place.

If you knew a bit of the physics of communication (I recommend reading Hans Schantz's papers and book for that) you'd realise that it's completely natural for communication (or interaction for that matter) to have a carrier – and this carrier for ExH does not need massed matter – it can perfectly well live in vacuum.

That’s my take on this. You played an important role in this discourse. We have identified what momentum is in Catt’s theory and Heaviside Signal. Eureka!

What I have also discovered, thanks to Ivor, is a potential way towards future computing – one based NOT on envelope characteristics, such as exponentials and sines, but on discretised steps. This could potentially improve the speed of computation by 2-3 orders of magnitude – we just need appropriate devices to support this speed and react to changes transmitted in the energy current. I am already working on this!!!

Alex
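
As an aside (my own illustration, not part of the exchange): the relation c(dE/dx) = dE/dt at the centre of this discussion can be checked symbolically – any profile of the form E(x, t) = f(x - ct) satisfies it identically, with c entering only as the constant coefficient linking change in time to change in space.

```python
# Symbolic check (my own illustration): a travelling profile E(x, t) = f(x - c*t)
# satisfies dE/dt = -c * dE/dx, with c acting only as a constant scale factor.
import sympy as sp

x, t, c = sp.symbols('x t c', real=True, positive=True)
f = sp.Function('f')          # arbitrary step-like profile
E = f(x - c * t)              # profile moving in +x at speed c

lhs = sp.diff(E, t)           # dE/dt
rhs = -c * sp.diff(E, x)      # -c * dE/dx
print(sp.simplify(lhs - rhs)) # prints 0: the relation holds identically
```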


Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a 'mothball computer' – a capsule whose outer surface harvests power from the environment, while inside the capsule we have the computational electronics.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays or ultimately by gamma rays!

Power supply for the modulation electronics can be generated by solar cells – perovskite cells. Because perovskite cells contain lead, they can stop the gamma rays from propagating outside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a 'continuum' range of temporal compute, operating in the range between a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse analog averaged signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443)
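
As a rough illustration of the time-averaged pulse-burst idea (my own sketch, in the spirit of the stochastic averaging techniques in the Mars & Poppelbaum book), two values encoded as pulse densities can be multiplied simply by ANDing their streams and averaging over the burst:

```python
# Values in [0, 1] encoded as pulse densities; an AND gate multiplies two such
# time-averaged bursts (classic stochastic computing, cf. Mars & Poppelbaum).
import random

def pulse_burst(value, length):
    """Encode a value in [0, 1] as a random burst with that pulse density."""
    return [1 if random.random() < value else 0 for _ in range(length)]

def decode(burst):
    """Time-average the burst back into a value."""
    return sum(burst) / len(burst)

random.seed(1)
a, b, n = 0.6, 0.5, 100_000
product = [x & y for x, y in zip(pulse_burst(a, n), pulse_burst(b, n))]
print(f"expected {a * b:.3f}, decoded {decode(product):.3f}")
```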

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

Talking at the AI Workshop held at the Center for AI Research (CAIR) at University of Agder, Norway

I was invited to the University of Agder, in the south of Norway (in a nice town called Grimstad, famous for the presence of Henrik Ibsen and Knut Hamsun), to present my vision of what kind of hardware we need for pervasive AI. This presentation was part of a workshop organised by Prof Ole-Christoffer Granmo, Director of CAIR, on the occasion of the grand opening of CAIR – https://cair.uia.no

In my presentation I emphasized the following points:

  • Pervasive Intelligence requires reconsidering many balances:
    – Between software and hardware
    – Between power and compute
    – Between analog and digital
    – Between design and fabrication and maintenance
  • Granulation phenomenon: Granularity of power, time, data and function
  • Main research questions:
    – Can we granulate intelligence to a minimum?
    – What is the smallest level at which we can make cyber-systems learn in terms of power, time, data and function?
  • Grand challenge for pervasive hardware AI:
    To enable electronic components with an ability to learn and compute in real-life environments with real-power and in real-time
  • Research Hypothesis:
    We should design systems that are energy-modulated and self-timed, with maximally distributed learning capabilities

I put forward a strong hypothesis on the role of Tsetlin Automata (automata with linear tactics) in building electronics with high-granularity learning capabilities; a minimal behavioural sketch of such an automaton follows the list of key elements below.

The key elements of the proposed approach are:

  • Event-driven, robust to power and timing fluctuations
  • Decentralised Tsetlin Automata (TAs) for learning on demand
  • Mixed digital-analog compute where elements are enabled and controlled by individual TAs
  • Approximate by nature, both in learning and in compute
  • Asynchronous logic for h/w implementation
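
Here is the minimal behavioural sketch of a two-action Tsetlin Automaton promised above (my own toy illustration of the learning primitive, not the proposed hardware):

```python
# A two-action Tsetlin Automaton (automaton with linear tactics): a toy sketch
# of the learning primitive only, not the proposed hardware architecture.
import random

class TsetlinAutomaton:
    def __init__(self, n_states_per_action=6):
        self.n = n_states_per_action
        self.state = self.n              # states 1..n -> action 0, n+1..2n -> action 1

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):                    # reinforce: move deeper into current action
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):                  # weaken: drift towards (and across) the boundary
        self.state += 1 if self.action() == 0 else -1

random.seed(0)
ta, reward_prob = TsetlinAutomaton(), [0.3, 0.8]   # action 1 is the better choice
for _ in range(1000):
    chosen = ta.action()
    ta.reward() if random.random() < reward_prob[chosen] else ta.penalize()
print("learned action:", ta.action())              # converges to action 1
```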

The full set of my slides is here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/AlexYakovlev-AI%20Hardware-070219.version3.pdf

My Talk at the RAEng Fellows Day at Newcastle

I was invited to give a talk on my Research at the Royal Academy of Engineering event, held in Newcastle on the 28th January 2019.

The title of the talk was "Asynchronous Design Research, or Building Little Clockless Universes".

The PDF of the slides of my talk is here: http://async.org.uk/presentations/AlexYakovlev-Research-RAEngEvent-280119.pdf

I only had 15 minutes given to me. Not a lot to cover 40 years of research life. So, at some point in preparing this talk, I decided that I would try to explain what research in microelectronic systems design is about, and in particular how my research in asynchronous design helps it.

Basically, I tried to emphasize the role of 'time control' in designing 'little universes', where the time span covered by our knowledge of what is going on in those systems, and why, stretches from a few picoseconds (a transistor switching event) to hours if not days (application lifetimes). So we cover around 10^18 events. How does that compare to the life of the universe – "only" around 10^10 years? Are we as powerful as gods in creating our 'little universes'?

So, in my research I want to better control TIME at the smallest possible scale – surprisingly, by going CLOCK-LESS! Clocking creates an illusory notion of determinacy in tracking events and their causal relationships. Actually, it obscures such information. Instead, by doing your circuit design in a disciplined way, such as speed-independent circuit design, you can control the timing of events down to the finest levels of granularity. In my research I have achieved that level of granularity for TIME. It took me some 40 years!
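
For readers unfamiliar with speed-independent design, the canonical building block is the Muller C-element, whose output changes only when both inputs agree; this is how causal ordering of events is enforced without a clock. A minimal behavioural sketch (my own illustration, not taken from the talk):

```python
# Behavioural sketch of a Muller C-element: the output switches only when both
# inputs agree, otherwise it holds its previous value.
class CElement:
    def __init__(self, initial=0):
        self.out = initial

    def update(self, a, b):
        if a == b:            # inputs agree: output follows them
            self.out = a
        return self.out       # inputs disagree: output holds its state

c = CElement()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(f"a={a} b={b} -> out={c.update(a, b)}")
# the output rises only after both inputs rise, and falls only after both fall
```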

But furthermore, more recently – say in the last 10 years – I have managed to learn pretty well how to manage power and energy down to that smallest possible level too, and actually make sure that energy consumption is known to the level of events controlled in a causal way. Energy/power-modulated computing, and its particular form of power-proportional computing, is the way to do that. We can really keep track of where energy goes down to the level of a few femtojoules. Indeed, if the parasitic capacitance of an inverter output in modern CMOS technology is around 10 fF and we switch it at Vdd = 1 V, we are talking about a minimum energy quantity of CV^2 = 10 fJ = 10^-14 J per charging/discharging cycle (0-1-0 in terms of logic levels). Mobile phones run applications that consume energy at the level of 10^4 J. Again, as with time, we seem to be pretty well informed about what's going on in terms of energy covering 10^18 events! Probably I'll just need another 5 or so years to conquer determinacy in energy and power terms – our work on Real-Power Computing is in this direction.
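
The arithmetic behind that 10^18 figure is easy to reproduce (using the same illustrative numbers as above):

```python
# Back-of-the-envelope check of the energy span, using the figures quoted above.
C_parasitic = 10e-15                 # 10 fF inverter output capacitance
Vdd = 1.0                            # supply voltage, V
E_cycle = C_parasitic * Vdd ** 2     # energy per 0-1-0 cycle: 10 fJ = 1e-14 J
E_app = 1e4                          # application-level energy budget, J
print(f"E_cycle = {E_cycle:.0e} J, events covered = {E_app / E_cycle:.0e}")   # ~1e18
```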

Now, what's next, you might ask? What other granularification, distribution and decentralization can we conquer in building little universes!? The immediate guess that comes to my mind is the distribution (in the time and energy directions) of functionality – and, to be more precise, of intelligence. Can we create granules of intelligence at the smallest possible scale, and cover the same orders of magnitude? It is a hard task. Certainly, for CMOS technology it would be really difficult to imagine that we can make something like a small collection of transistors dynamically learn and optimize its functionality. But there are ways of getting pretty close to that. One of them seems to be the direction of learning automata. Read about Tsetlin automata, for example (https://en.wikipedia.org/wiki/Tsetlin_machine), in the recent work of Ole-Christoffer Granmo.


Power-staggered computing

In the past, people were trying to develop efficient algorithms for solving complex problems. The efficiency criteria would often be limited to performance, CPU time, or memory size. Today, CPU time or memory is not the problem. The problem is to fit your computational solution within the bounds of the available energy resources and yet deliver sufficient quality.

This angle of attack started to rise on the horizon of computing about a decade or so ago, when people began to put many CPU/GPU/FPGA and memory cores on a die.

Terms such as power/energy-proportional computing and energy-modulated (my term!) computing began to emerge to address this approach.

What we should now look at more is how to develop algorithms and architectures for compute that are not simply energy-efficient or fast, but that are aware of the information they process – the level and granularity of its importance or significance – as well as aware of the implementation technology underlying the compute architectures.

This is underpinned by the concept of approximate computing – not in the sense of approximating the processed data, say by truncating the data words, but rather of approximating the functions that process this data.

For example, instead of (or in addition to) trying to tweak an exact algorithm that works at O(n^3) to work at O(n^2 log n), we can find an approximate, i.e. inexact, algorithm that works at O(n), which could work hand-in-hand with the exact one – but the two algorithms would be expected to play different roles. The inexact one would act as an assistant to the exact one – a whistle-blower for it. It would give some classification results on the data, at a very low power cost, and only wake up the exact one when necessary, i.e. when the significance of the processing should go up.
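
Here is a hypothetical sketch of that whistle-blower pairing; the threshold and both routines are placeholders of my own, not the actual method described below:

```python
# Hypothetical whistle-blower pairing: a cheap O(n) significance detector gates
# a more expensive exact routine. Threshold and routines are placeholders.
import statistics

def cheap_detector(block, threshold=0.1):
    """Inexact, low-power pass: flag a block only if its spread is significant."""
    return statistics.pstdev(block) > threshold

def exact_processing(block):
    """Stand-in for the expensive, exact computation."""
    return sum(x * x for x in block)

def power_staggered(blocks):
    return [exact_processing(b) if cheap_detector(b) else None for b in blocks]

flat = [0.5] * 8                                     # low significance: stay asleep
busy = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.2, 0.8]      # high significance: wake up
print(power_staggered([flat, busy]))                 # exact stage runs only on 'busy'
```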

One can think about such a power- (and performance-) staggered approach in various contexts.

One such example is shown in the work of our PhD student Dave Burke, who has developed a significance-driven image processing method. He detects the significance gradient based on statistical measures, such as the standard deviation (cf. the inexact compute algorithm), and makes a decision on whether and where to apply more exact computation.

Watch this great video from Dave: https://www.youtube.com/watch?time_continue=1&v=kbKhU7CvEb8 and observe the effects of power-staggered computing!