Bridging Async and Analog at ASYNC 2018 and FAC 2018 in Vienna

I attended ASYNC 2018 and FAC 2018 in Vienna in May. It was the first time these two events were co-located back to back, with FAC (Frontiers of Analog CAD) following ASYNC.

See http://www.async2018.wien/

I gave an invited ‘bridging’ keynote “Async-Analog: Happy Cross-talking?”.

Here are the slides in pdf:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/ASYNC18-FAC18-keynote-AY-last.pdf

Energy-vector, momentum, causality, Energy-scalar …

Some more interesting discussions with Ed Dellian have resulted in this ‘summary’, made in the context of my current level of understanding of the Catt theory of electromagnetism:

  1. Energy current (E-vector) causes momentum p.
  2. Causality is expressed via the proportionality coefficient c (the speed of the energy current).
  3. Momentum p is what mediates between the E-vector and changes in the matter.
  4. Momentum p is preserved as the energy current hits the matter.
  5. Momentum in the matter represents another form of energy (E-scalar).
  6. E-scalar characterises the elements of the matter as they move with a (material) velocity.
  7. As elements of the matter move, they cause changes in the energy current (E-vector), and this forms a fundamental feedback mechanism (which is recursive/fractal …).

Putting this in terms of EM theory and electricity:

  • E-vector (Poynting vector, aka the Heaviside signal) causes E-scalar (electric current in the matter).
  • This causality between E-vector and E-scalar is mediated by momentum p causing the motion of charges.
  • The motion of charges with material velocity causes changes in the E-vector, i.e. the feedback effect mentioned above (e.g. self-induction) – see the standard relations written out after this list.
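
In conventional (textbook) EM notation – my own shorthand for the above, not Ed Dellian’s or Ivor Catt’s wording – these relations read:

\[
\mathbf{S} = \mathbf{E} \times \mathbf{H}, \qquad
\mathbf{g} = \frac{\mathbf{S}}{c^{2}}, \qquad
\frac{u}{|\mathbf{g}|} = c \quad \text{(for a plane wave)},
\]

where S is the Poynting vector (the energy current), g is the electromagnetic momentum density and u is the energy density; the last ratio is the field-level counterpart of the E/p = c proportion discussed below.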

I’d be most grateful if someone could refute these items and bullets.

I also recommend reading my blog post (from 2014) on discretisation:

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Real Nature’s proportionality is geometric: Newton’s causality

I recently enjoyed e-mail exchanges with Ed Dellian.

Ed is one of the very few modern philosophers and science historians who have read Newton’s Principia in the original (and produced his own translation of the Principia into German, published in 1988).

Ed’s position is that the real physical (Nature’s) laws reflect cause and effect in the form of geometric proportionality. The most fundamental of these is E/p = c, where E is energy, p is momentum and c is velocity – a proportionality coefficient, i.e. a constant associated with space over time. This view is in line with the Poynting-vector understanding of electromagnetism, also accepted by Heaviside in his notion of ‘energy current’. It is even the basis of Einstein’s E/(mc) = c.
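
To spell out the last step – this rearrangement is my own addition, for the reader’s convenience:

\[
E = mc^{2} \;\Longrightarrow\; \frac{E}{mc} = c, \qquad \text{and, writing } p = mc, \qquad \frac{E}{p} = c,
\]

while for light the exact relation E = pc gives the same geometric proportion E/p = c directly.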

The departure from geometric proportionality towards arithmetic proportionality was due to Leibniz and his principle of “causa aequat effectum”. According to Ed (I am quoting him here): “it is a principle that has nothing to do with reality, since it implies ‘instantanity’ of interaction, that is, interaction independently of ‘real space’ and ‘real time’, conflicting with the age-old natural experience expressed by Galileo that ‘nothing happens but in space and time’”. It is therefore important to see how Maxwellian electromagnetism is read by scholars. For example, Faraday’s law states an equivalence of EMF and the rate of change of magnetic flux – it is not a geometric proportion, hence it is not causal!

My view, which is based on my experience with electronic circuits and my understanding of the causality between energy and information transfer (state changes), where energy is the cause and information transfer is the effect, is in agreement with geometric proportionality. Energy causes state transitions in space-time. This is what I call energy-modulated computing. It is challenging to refine this proportionality in every real problem case!

If you want to know more about Ed Dellian’s views, I recommend visiting his site http://www.neutonus-reformatus.de  which contains several interesting papers.

A causes B – what does it mean?

There is a debatable issue concerning the presence of causality in the interpretation of some physical relationships, such as those involved in electromagnetism. For example, “the dynamic change in magnetic field H causes the emergence of electric field E”. This is a common interpretation of one of the key Maxwell equations (originating in Faraday’s law). What does this “causes” mean? Is the meaning purely mathematical, or is it more fundamental, or physical?

First of all, any man-made statements about real-world phenomena are not, strictly speaking, physical, because they are formulated by humans within their perceptions – or, whether we want it or not, models – of the real world. So, even if we use English to express our perceptions, we already depart from the “real physics”. Mathematics is just another man-made form of expression, one that is underpinned by formal rigour.

Now let’s get back to the interpretation of the “causes” (or causality) relation. It is often treated as synonymous with the “gives rise to” relation. Such relations create a lot of confusion when they originate from the interpretation of mathematical equations. For example, Faraday’s law in mathematical form, curl(E) = –dB/dt, does not say anything about the RHS causing or giving rise to the LHS. (Recall that B is proportional to H, with the permeability of the medium being the coefficient of proportionality.)
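
Written out in standard notation (nothing beyond the textbook form), the relation in question is:

\[
\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t}, \qquad \mathbf{B} = \mu \mathbf{H},
\]

and both sides are evaluated at the same point and the same instant – the equation itself contains no ‘before’ and ‘after’, which is precisely why any causal reading has to be imported from outside the mathematics.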

The interpretation problem, when taken outside pure mathematics, leads to the question, for example, of HOW QUICKLY the RHS causes the LHS. And here we have no firm answer. The question of “how quickly does the cause have an effect” is very much physical (yet neither Faraday nor Maxwell state anything about it!), because we are used to thinking that if A causes B, then we imply some temporal precedence between the event associated with A and the event associated with B. We also know that it is unlikely that this ‘causal precedence’ will take effect faster than the speed of light (we haven’t seen any evidence of information signalling acting faster than the speed of light!). Hence, causality at the speed of light is something that may simply be the result of our causal interpretation. But then it is probably wrong to assume that Faraday or Maxwell gave this sort of interpretation to the above relationship.

Worth thinking about causality, isn’t it?

I have no clear answer, but in my opinion, reading the original materials on electromagnetic theory, such as Heaviside’s volumes, rather than modern textbooks would be a good recipe!

I recommend that anyone interested in this debatable matter check out Ivor Catt’s view on it:

http://www.ivorcatt.co.uk/x18j73.pdf

http://www.ivorcatt.co.uk/x18j184.pdf

To the best of my knowledge, Catt was the first to notice and write about the fact that modern texts on electromagnetism actively use the ‘causes’ interpretation of Maxwell’s equations. He also claims that such equations are “obvious truisms about any body or material moving in space”. The debatable matter may then start to move from the question of the legitimacy of the causal interpretation of these equations towards the question of how useful these equations are for an actual understanding of electromagnetism …

Electromagnetic Compatibility event (EMC-COMPO’17) in St. Petersburg

A very interesting workshop was held at my alma mater (LETI – the Electrotechnical University) in Saint Petersburg, Russia, on 4-8 July 2017.

https://emccompo2017.eltech.ru

The workshop featured lots of interesting presentations – largely from industry, and largely on modelling and empirical measurements of EM interference in microsystems and ICs. Basically, the problem of reuse and block replacement is huge due to the unpredictability of the EM effects between components on PCBs and on chip.

Here are the presentations:

https://emccompo2017.eltech.ru/results/presentations

Milos Krstic (from IHP) and I gave a keynote talk, which consisted of two parts:

(1) Digital Systems Clocking with and without clock: a historical retrospective (emphasizing the role of researchers from LETI – mostly Victor Varshavsky’s group where I used to work in the 1980s)

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/25-Digital-Systems-Clocking-with-and-without-clock.pdf

(2) Main technical contribution: Reducing Switching Noise Effects by Advanced Clock Management: M. Krstic, X. Fan, M. Babic, E. Grass, T. Bjerregaard, A. Yakovlev

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/03-Reducing-Switching-Noise-Effects.pdf

 

Talking at the 2016 ARM Research Summit

Last week there was the inaugural ARM Research Summit.

https://developer.arm.com/research/summit

I gave a talk on Power & Compute Codesign for “Little Digital” Electronics.

Here are the slides of this talk:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/Power-and-Compute-Talk

Here is the abstract of my talk:

Power and Compute Codesign for “Little Digital” Electronics

Alex Yakovlev, Newcastle University

alex.yakovlev@ncl.ac.uk

The discipline of electronics and computing system design has traditionally separated power management (regulation, delivery, distribution) from data processing (computation, storage, communication, user interface). Power control has always been the prerogative of power engineers, who designed power supplies for loads that were typically defined in a relatively crude way.

 

In this talk, we take a different stance and address upcoming electronics systems (e.g. Internet of Things nodes) more holistically. Such systems are miniaturised to the level that both power management and data-processing are virtually inseparable in terms of their functionality and resources, and the latter are getting scarce. Increasingly, both elements share the same die, and the control of power supply, or what we call here a “little digital” organ, also shares the same silicon fabric as the power supply. At present, there are no systematic methods or tools for designing “little digital” that could ensure that it performs its duties correctly and efficiently.  The talk will explore the main issues involved in formulating the problem of and automating the design of little digital circuits, such as models of control circuits and the controlled plants, definition and description of control laws and optimisation criteria, characterisation of correctness and efficiency, and applications such as biomedical implants, IoT ‘things’ and WSN nodes.

 

Our particular focus in this talk will be on power-data convergence and ways of designing energy-modulated systems [1].  In such systems, the incoming flow of energy will largely determine the levels of switching activity, including data processing – this is fundamentally different from the conventional forms where the energy aspect simply acts as a cost function for optimal design or run-time performance.

 

We will soon be asking ourselves questions like these: For a given silicon area and given data-processing functions, what is the best way to allocate silicon to power and computational elements? More specifically, for a given energy supply rate and given computation demands, which of the following system designs would be better? One involves a capacitor network for storing energy, investing energy into charging and discharging flying capacitors through computational electronics that can sustain high fluctuations of Vcc (e.g. built using self-timed circuits). The other involves a switched-capacitor converter supplying power as a reasonably stable Vcc (possibly a set of levels). In this latter case, it would also be necessary to invest some energy into powering the control for the voltage regulator. In order to decide between these two organisations, one would need to carefully model both designs and characterise them in terms of energy utilisation and delivery of performance for the given computation demands. At present, there are no good ways of co-optimising power and computational electronics.
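
Purely as an illustration of the kind of back-of-the-envelope comparison I have in mind – all the numbers and loss models below are my own placeholder assumptions, not measurements of any real design – a toy energy budget for the two organisations could be sketched as follows:

# Toy energy-budget comparison of the two power/compute organisations.
# All parameters are hypothetical placeholders chosen only to make the
# comparison concrete; they do not come from any real design.

E_IN = 100e-6        # energy available per burst, in joules (assumed)
E_PER_OP = 10e-12    # energy per computational operation, in joules (assumed)

def ops_direct_charge_sharing(e_in, sharing_loss=0.25):
    """Organisation A: flying capacitors discharged straight into
    voltage-tolerant (e.g. self-timed) logic; the only overhead modelled
    is the charge-redistribution loss of capacitor-to-capacitor transfer."""
    usable = e_in * (1.0 - sharing_loss)
    return int(usable / E_PER_OP)

def ops_switched_cap_converter(e_in, efficiency=0.85, ctrl_overhead=5e-6):
    """Organisation B: a switched-capacitor converter delivers a stable Vcc
    at a given efficiency, and its controller (the 'little digital' organ)
    consumes a fixed slice of the budget."""
    usable = max(e_in * efficiency - ctrl_overhead, 0.0)
    return int(usable / E_PER_OP)

print("A (direct charge sharing): ", ops_direct_charge_sharing(E_IN), "operations")
print("B (switched-cap converter):", ops_switched_cap_converter(E_IN), "operations")

The point is not the particular numbers but that the crossover between the two organisations depends on quantities – sharing loss, converter efficiency, controller overhead – that are rarely modelled together today.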

 

Research in this direction is in its infancy, and this is only the tip of the iceberg. This talk will shed some light on how we are approaching the problem of power-data co-design at Newcastle, in a series of research projects producing novel types of sensors, ADCs, asynchronous controllers for power regulation, and software tools for designing “little digital” electronics.

[1] A. Yakovlev. Energy modulated computing. Proceedings of DATE, 2011, Grenoble, doi: 10.1109/DATE.2011.5763216

My vision of Bio-inspired Electronic Design

I took part in a Panel on Bio-inspired Electronic Design Principles at the

Here are my slides

The quick summary of these ideas is here:

 

Summary of ideas for discussion from Alex Yakovlev, Newcastle University

 

With my 30 years of experience in designing and automating the design of self-timed (aka asynchronous) systems, I have been involved in studying and exploiting in practice the following characteristics of electronic systems: inherent concurrency, event-driven and causality-based processing, parametric variation resilience, closed-loop timing error avoidance and correction, energy proportionality, and digital and mixed-signal interfaces. More recently, I have been looking at new bio-inspired paradigms such as energy-modulated and power-adaptive computing, significance-driven approximate computing, real-power (to match real-time!) computing, computing with survival instincts, computing with central and peripheral powering and timing, power layering in systems architecting, exploiting burstiness and regularity of processing, etc.

In most of these, the central role belongs to the notion of energy flow as a key driving force in the new generation of microelectronics. I will therefore be approaching most of the questions raised for the Panel from the energy-flow perspective. The other strong aspect I want to address, which acts as a driver of innovation in electronics, is the combination of technological and economic factors. This is closely related to survival, both in the sense of the longevity of a particular system and in the sense of the survival of design patterns and IPs – the longevity of the system as a kind, or of the system design process itself.

My main tenets in this discussion are:

  • Compute where energy naturally flows.
  • Evolve (IPs, Designs) where biology (or nature as a whole) would evolve its parts (DNA, cells, cellular networks, organs).

I will also pose, as one of the biggest challenges for semiconductor systems, the challenge of massive informational connectivity of parts at all levels of hierarchy; this is something that I hypothesise can only be addressed in hybrid cell-microelectronic systems. Information (and hence data-processing) flows should be commensurate with energy flows; only then will we be close to thermodynamic limits.
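
For a sense of the scale of those thermodynamic limits (a standard figure, quoted here only for orientation), the Landauer bound on the energy needed to erase one bit at temperature T = 300 K is

\[
E_{\min} = k_{B} T \ln 2 \approx 1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9\times10^{-21}\,\mathrm{J},
\]

several orders of magnitude below the switching energies of today’s circuits.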

Alex Yakovlev

11.08.2016

 

Three more NEMIG talks

There have been three more very interesting talks in our Electromagnetism Interest Group’s seminar series.

All their recordings can be found here:

http://www.ncl.ac.uk/eee/research/interestgroups/nemig/

Professor Russell Cowburn
Cavendish Laboratory, University of Cambridge
IEEE Distinguished Lecturer 2015

Most thin magnetic films have their magnetization lying in the plane of the film because of shape anisotropy.  In recent years there has been a resurgence of interest in thin magnetic films which exhibit a magnetization easy axis along the surface normal due to so-called Perpendicular Magnetic Anisotropy (PMA).  PMA has its origins in the symmetry breaking which occurs at surfaces and interfaces and can be strong enough to dominate the magnetic properties of some material systems.  In this talk I explain the physics of such materials and show how the magnetic properties associated with PMA are often very well suited to applications.  I show three different examples of real and potential applications of PMA materials: ultralow power STT-MRAM memory devices for green computing, 3-dimensional magnetic logic structures and a novel cancer therapy.

Prof. David Daniels CBE
Managing Director, Short Range Radar Systems Limited
Visiting Professor at University of Manchester

Ground penetrating radar (GPR) is an electromagnetic technique for the detection, recognition and identification of objects or interfaces buried beneath the earth’s surface or located within a visually opaque structure. GPR can be used for many applications, ranging from geophysical prospecting, forensic investigation, utility inspection, and landmine and IED detection to through-wall radar for security applications.

The main challenge for GPR as an electromagnetic imaging method is that of an ill-posed problem. The physical environment is in many situations inhomogeneous and consequently both propagation parameters and reflector / target occupancy are spatially variable. Current imaging methods such as diffraction tomography, reverse time migration, range migration and back projection work when the propagation parameters are well described and stable and the target radar cross section is relatively simple. The future challenge for GPR is to develop robust methods of imaging that work in real world conditions with more demanding targets.

The seminar will introduce the principles of the technique, the basic propagation issues as well as time domain and frequency domain system and antenna design from the system engineer’s viewpoint. Various applications will be considered and the basic signal processing methods that are used will be introduced using examples of some signal and imaging processing methods. The seminar will briefly consider the future developments needed to improve the inherent capability of the technique.

Paul Sutcliffe is Professor of Mathematical Physics at Durham University

Abstract: Non-abelian Yang-Mills-Higgs gauge theories have classical solutions that describe magnetic monopoles. These are stable soliton solutions with no singularities, which have the same long-range electromagnetic fields as those of a Dirac monopole. There are also multi-monopole solutions that have surprising symmetries, including those of the Platonic solids.

 

Two more exciting lectures on Electromagnetism

In the last two months we have had two fascinating lectures in our NEMIG series:

The Time Domain, Superposition, and How Electromagnetics Really Works – Dr. Hans Schantz – 14 November 2014

http://async.org.uk/Hans-Schantz.html

Twists & Turns of the Fascinating World of Electromagnetic Waves – Prof. Steve Foti – 12th December 2014

http://async.org.uk/SteveFoti.html

These pages contain links to the abstracts and videos of the lectures, as well as the speakers’ bios.

 

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Alex Yakovlev

10th October 2014

I think I have recently reached a better understanding of the electromagnetics of physical objects according to Ivor Catt, David Walton, and … surprise, surprise … Oliver Heaviside!

I was interested in Catt and Walton’s derivations of transients (whose envelopes are exponential or sine/cosine curves) as sums of series of steps. I have recently been re-visiting their EM book (Ivor Catt’s “Electromagnetics 1” – see http://www.ivorcatt.co.uk/em.htm ).
I am really keen to understand all this ‘mechanics’ better, as it seems that I am gradually settling on the idea of the world being quantised by virtue of energy currents being trapped between reflection points, with the continuous pictures of the transients being just the results of step-wise processes.

I deliberately use the word ‘quantised’ above because I tend to think that ‘quantisation’ and ‘discretisation’ are practically synonyms (in the physical sense; mathematicians may argue, of course, because they may attach some abstract notions to these terms). I’ll try to explain my understanding below.

Let’s see what happens with the TEM wave as it travels in a transmission line with reflections. We have a series of steps in voltage which eventually form an exponential envelope. If we examine these steps, they show discrete sections in time and amplitude. The durations of the time sections between these steps are determined by the finite and specific characteristics of the geometry of the transmission line and the properties of the (dielectric) medium. The amplitude levels between these steps are determined by the electrical properties of the line and the power level of the source.
So, basically, these discrete values associated with the energy entrapment in the transmission line (TL) are determined by the inherent characteristics of the matter and of the energetic stimulus.
If we stimulated the TL with periodic changes in the energy current, we would observe a periodic process with discretised values in those steps – the envelope of which could be a sequence of charging and discharging exponentials.
I suppose that if we combined such a transmission line (which is largely capacitive in the above) with an inductance, we would have an LC oscillator; this would produce a periodic, similarly step-wise, discretised process whose envelope would be a sine wave. A small numerical sketch of the capacitive case follows below.
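
Here is a minimal numerical sketch of the first case – charging an open-ended line through a source resistance, computed step by step from reflections in the Catt/Walton manner; the component values are placeholders I have chosen purely for illustration:

# Step-wise charging of an open-ended transmission line through a resistor,
# built up from successive reflections. The staircase converges to the
# source voltage, and its envelope is the familiar RC exponential.
# All numerical values are illustrative placeholders.

import math

V0 = 1.0        # source voltage step, volts
R  = 1000.0     # source resistance, ohms
Z0 = 50.0       # characteristic impedance of the line, ohms
T  = 1e-9       # one-way transit time of the line, seconds

rho_s = (R - Z0) / (R + Z0)   # reflection coefficient at the source end
v1    = V0 * Z0 / (R + Z0)    # amplitude of the first launched wave
C     = T / Z0                # total capacitance of the line (lumped view)

for k in range(1, 11):
    # voltage at the open end after k round trips (the open end reflects with +1)
    v_stair = 2 * v1 * (1 - rho_s**k) / (1 - rho_s)
    # continuous RC envelope evaluated at the end of the k-th round trip
    v_env = V0 * (1 - math.exp(-2 * k * T / (R * C)))
    print(f"k={k:2d}   staircase={v_stair:.4f} V   RC envelope={v_env:.4f} V")

The two columns agree closely because, for R much larger than Z0, the per-round-trip factor rho_s satisfies -ln(rho_s) ≈ 2*Z0/R, which reproduces exactly the time constant R·C of the lumped view: the ‘exponential’ is nothing more than the envelope of the step-wise reflections.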

Now, if we analyse such a system in its discretised (rather than enveloped) form, we could, if we wanted, produce some sort of histogram showing the distribution of how much time the object in which we trap the energy current spends at each amplitude level (we could even assign specific energy levels). Now we can call such an object a “Quantum Object”. Why not? I guess the only difference between our “quantum object” and the ones that quantum physicists talk about would be purely mathematical. We know the object well, and our characterisation of the discretised process is deterministic, but they do not know their discretised process sufficiently well and so they resort to probabilities.
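
Continuing the sketch above (same placeholder values), such a dwell-time table is trivial to produce, because the line sits at each discrete level for exactly one round trip before the next reflection lifts it:

# Dwell-time table for the discretised charging process: each voltage level
# persists for one round trip (2*T) until the next reflection arrives.
# Parameters are the same illustrative placeholders as in the previous sketch.

V0, R, Z0, T = 1.0, 1000.0, 50.0, 1e-9
rho_s = (R - Z0) / (R + Z0)

# discrete amplitude levels; V0*(1 - rho_s**k) equals the staircase values
# above, since 2*v1/(1 - rho_s) = V0
levels = [V0 * (1 - rho_s**k) for k in range(1, 11)]
dwell  = 2 * T   # time spent at each level

for k, v in enumerate(levels, start=1):
    print(f"level {k:2d}: {v:.4f} V   held for {dwell*1e9:.1f} ns")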

If the above makes any sense, may I then make some hypotheses?

We live in a world that has finite-size objects of matter, however large or small they are. These objects have boundaries. The boundaries act as reflection points on the way of the energy current. Hence, associated with these objects and boundaries, we have entrapments of energy. These entrapments, due to reflections, give rise to discretisation in time and level. The grains of our (discretised) matter can be quite small, so the entrapments can be very small, and we cannot easily measure these steps in their sequences, but rather characterise them by some integrative measurements (accumulating and averaging them, as in luminescence); hence at some point we end up being probabilistic.

One more thing that bothers me is associated with the verticality of the steps and their slopes.
Let’s look at the moment when we change the state of a reed switch, or pull a line up to Vdd or down to GND. The time within which this transition takes place is also non-zero. That is, even if the propagation of the change is at the speed of light (modulo the epsilon and mu of the medium), i.e. with a finite time to destination, the transition of the voltage level must also be associated with some propagation of the field, or forces, inside the reed switch or the transistor, respectively, that pulls the line up or down. Clearly that time frame is much smaller than the time frame of propagating the energy current in the medium along the transmission line, but still it is not zero. I presume that, quite recursively, we can look at the finer granularity of this state change and see that it is itself a step-wise process of reflections of the energy current inside that small object, the switch, and that what we see as a continuous slope is actually an envelope of a step-wise process.