Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a ‘mothball computer’ – a capsule whose outer casing harvests power from the environment, with the computational electronics inside.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays or ultimately by gamma rays!
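For scale (this is just the standard photon-frequency relation, not a design figure): the clock frequency available from a photon scales with its energy as

\[
f = \frac{E}{h}, \qquad \text{e.g. } E = 1\,\mathrm{MeV} \approx 1.6 \times 10^{-13}\,\mathrm{J} \;\Rightarrow\; f \approx 2.4 \times 10^{20}\,\mathrm{Hz},
\]

some ten orders of magnitude above today’s GHz clocks.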

Power supply for the modulation electronics can be generated by solar cells – e.g. Perovskite cells. Because Perovskite cells contain lead, they can also shield the surroundings from gamma rays propagating outside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a ‘continuum’ range of temporal computing, operating across the range from a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse time-averaged signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443).
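As a toy illustration of the two ends of this continuum – assuming a simple unipolar rate encoding in which a value x in [0, 1] is carried by the density of pulses in a burst; the function names and parameters below are mine, not from the Mars & Poppelbaum book:

```python
import random

def encode_deterministic(x, n_pulses):
    """Deterministic averaged encoding: an accumulator spreads pulses
    evenly so the burst's time-averaged density approximates x."""
    pulses, acc = [], 0.0
    for _ in range(n_pulses):
        acc += x
        if acc >= 1.0:
            pulses.append(1)
            acc -= 1.0
        else:
            pulses.append(0)
    return pulses

def encode_stochastic(x, n_pulses, seed=0):
    """Stochastic encoding: each pulse fires with probability x, so the
    value is recovered only as a long-run average."""
    rng = random.Random(seed)
    return [1 if rng.random() < x else 0 for _ in range(n_pulses)]

def decode(pulses):
    """Both encodings are read out the same way: time-average the burst."""
    return sum(pulses) / len(pulses)

# A single AND gate multiplies two independent stochastic streams --
# the classic operation of stochastic computing.
sx = encode_stochastic(0.3, 10_000, seed=1)
sy = encode_stochastic(0.6, 10_000, seed=2)
sz = [a & b for a, b in zip(sx, sy)]
print(decode(encode_deterministic(0.3, 100)))  # ~0.3 (deterministic)
print(decode(sx), decode(sz))                  # ~0.3, ~0.18 (= 0.3 * 0.6)
```

The deterministic burst recovers the value exactly (up to rounding) in a short window, while the stochastic stream trades accuracy for averaging time – the ‘continuum’ between them is set by how much randomness the pulse generator injects.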

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

On “Quantum LC circuit paradox”

One of my younger friends and co-authors, Alex Kushnerov, has just pointed out to me the following statement:

“So, there are no electric or magnetic charges in the quantum LC circuit, but electric and magnetic fluxes only….”

It is made on the following website:

https://en.m.wikipedia.org/wiki/Quantum_LC_circuit

It seems that for ‘classical theorists’ in EM and Quantum Mechanics, this effect forms a paradox, which they call the “Quantum LC circuit paradox”.

Presumably, if they started with energy current in the first place – which has nothing to do with charges or currents – and then simply captured the energy current in spatial forms that manifest themselves as “capacitors” or “inductors”, they would quantize it quite comfortably in the normal deterministic and causal sense. Thus they would have the effects of LC without any need to resort to a special ‘quantum LC’.
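For reference, the standard textbook quantization behind the quoted statement treats the inductor flux Φ and the capacitor charge Q as the conjugate variables of a harmonic oscillator:

\[
\hat{H} = \frac{\hat{Q}^2}{2C} + \frac{\hat{\Phi}^2}{2L}, \qquad [\hat{\Phi}, \hat{Q}] = i\hbar, \qquad \omega_0 = \frac{1}{\sqrt{LC}}, \qquad E_n = \hbar\omega_0\left(n + \tfrac{1}{2}\right),
\]

so the quantized degrees of freedom are collective flux/charge variables of the circuit as a whole, not localised charge carriers – which is what gives the ‘paradox’ its flavour.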

I wrote about these ideas in my Royal Society Phil Trans paper:

Energy current and computing

And in my earlier blogs …, e.g. https://blogs.ncl.ac.uk/alexyakovlev/2014/10/

And, most importantly, that’s what Ivor Catt and his Catt Theory of EM have been trying to tell the rest of the world for more than half a century:

http://www.ivorcatt.co.uk

Bridging Async and Analog at ASYNC 2018 and FAC 2018 in Vienna

I attended ASYNC 2018 and FAC 2018 in Vienna in May. It was the first time these two events were co-located back to back, with FAC (Frontiers of Analog CAD) following ASYNC.

See http://www.async2018.wien/

I gave an invited ‘bridging’ keynote “Async-Analog: Happy Cross-talking?”.

Here are the slides in pdf:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/ASYNC18-FAC18-keynote-AY-last.pdf


Energy-vector, momentum, causality, Energy-scalar …

Some more interesting discussions with Ed Dellian have resulted in this ‘summary’, made in the context of my current level of understanding of the Catt Theory of electromagnetism:

  1. Energy current (E-vector) causes momentum p.
  2. Causality is made via the proportionality coefficient c (the speed of the energy current).
  3. Momentum p is what mediates between the E-vector and changes in the matter.
  4. Momentum p is preserved as the energy current hits the matter.
  5. Momentum in the matter presents another form of energy (E-scalar).
  6. The E-scalar characterises the elements of the matter as they move with a (material) velocity.
  7. As elements of the matter move, they cause changes in the energy current (E-vector), and this forms a fundamental feedback mechanism (which is recursive/fractal …).

Telling this in terms of EM theory and electricity:

  • The E-vector (Poynting vector, aka the Heaviside signal) causes the E-scalar (electric current in the matter).
  • This causality between E-vector and E-scalar is mediated by momentum p causing the motion of charges.
  • The motion of charges with material velocity causes changes in the E-vector, i.e. the feedback effect mentioned above (e.g. self-induction) – see the standard field relations sketched below.
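A minimal formal rendering of these bullets in standard field-theory terms (textbook relations, not Catt’s own notation): the energy current is the Poynting vector, and the field momentum density is proportional to it,

\[
\mathbf{S} = \mathbf{E} \times \mathbf{H}, \qquad \mathbf{g} = \frac{\mathbf{S}}{c^{2}},
\]

and for a plane wave in vacuum the energy density is \(u = |\mathbf{S}|/c\), so \(u/|\mathbf{g}| = c\) – the same E/p = c geometric proportion, with c as the mediating coefficient.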

I’d be most grateful if someone could refute (or confirm) these items and bullets.

I also recommend reading my blog post (from 2014) on discretisation:

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Real Nature’s proportionality is geometric: Newton’s causality

I recently enjoyed e-mail exchanges with Ed Dellian.

Ed is one of the very few modern philosophers and science historians who have read Newton’s Principia in the original (and produced his own translation of the Principia into German – published in 1988).

Ed’s position is that the real physical (Nature’s) laws reflect cause and effect in the form of geometric proportionality. The most fundamental is E/p = c, where E is energy, p is momentum and c is velocity – a proportionality coefficient, i.e. a constant associated with space over time. This view is in line with the Poynting-vector understanding of electromagnetism, also accepted by Heaviside in his notion of ‘energy current’. It is even the basis of Einstein’s E/(mc) = c.
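A concrete instance of this geometric proportion, for light itself: for a photon,

\[
E = \hbar\omega, \qquad p = \hbar k, \qquad \frac{E}{p} = \frac{\omega}{k} = c,
\]

and Einstein’s relation read the same way is the proportion \(E/(mc) = c\), i.e. \(E = mc^{2}\) – two heterogeneous quantities related by the constant c rather than equated.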

The diversion from geometric proportionality towards arithmetic proportionality was due to Leibniz and his principle of “causa aequat effectum”. According to Ed (I am quoting him here), “it is a principle that has nothing to do with reality, since it implies ‘instantanity’ of interaction, that is, interaction independently of ‘real space’ and ‘real time’, conflicting with the age-old natural experience expressed by Galileo that ‘nothing happens but in space and time’”. It is therefore important to see how Maxwellian electromagnetism is read by scholars. For example, Faraday’s law states an equivalence of EMF and the rate of change of magnetic flux – it is not a geometric proportion, hence it is not causal!

My view, which is based on my experience with electronic circuits and my understanding of the causality between energy and information transfer (state changes) – where energy is the cause and information transfer is the effect – is in agreement with geometric proportionality. Energy causes state transitions in space-time. This is what I call energy-modulated computing. It is challenging to refine this proportionality in every real problem case!

If you want to know more about Ed Dellian’s views, I recommend visiting his site http://www.neutonus-reformatus.de  which contains several interesting papers.


A causes B – what does it mean?

There is a debatable issue concerning the presence of causality in the interpretation of some physical relationships, such as those involved in electromagnetism. For example, “the dynamic change in magnetic field H causes the emergence of electric field E”. This is a common interpretation of one of the key Maxwell equations (originating in Faraday’s law). What does this “causes” mean? Is the meaning purely mathematical, or is it more fundamental, or physical?

First of all, any man-made statements about real-world phenomena are not, strictly speaking, physical, because they are formulated by humans within their perceptions – or, whether we want it or not, models – of the real world. So even if we use English to express our perceptions, we already depart from the “real physics”. Mathematics is just a man-made form of expression, underpinned by some mathematical rigour.

Now let’s get back to the interpretation of the “causes” (or causality) relation. It is often synonymised with the “gives rise to” relation. Such relations present a lot of confusion when they originate from the interpretation of mathematical equations. For example, Faraday’s law in mathematical form, curl E = −∂B/∂t, does not say anything about the RHS causing or giving rise to the LHS. (Recall that B is proportional to H, with the permeability of the medium as the coefficient of proportionality.)
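For reference, the relation under discussion, written out in differential form:

\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \mathbf{B} = \mu \mathbf{H}.
\]

Note that the equation constrains the two field quantities at the same point and the same instant; nothing in the notation itself orders one side before the other in time.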

The interpretation problem, when taken outside pure mathematics, leads, for example, to the question of HOW QUICKLY the RHS causes the LHS. And here we have no firm answer. The question of “how quickly does the cause have an effect” is very much physical (yet neither Faraday nor Maxwell state anything about it!), because we are used to thinking that if A causes B, then we imply some temporal precedence between the event associated with A and the event associated with B. We also know that this ‘causal precedence’ is unlikely to take effect faster than the speed of light (we have seen no evidence of information signalling acting faster than the speed of light!). Hence causality at the speed of light may itself be the product of our causal interpretation. But then it is probably wrong to assume that Faraday or Maxwell gave this sort of interpretation to the above relationship.

Worth thinking about causality, isn’t it?

I have no clear answer, but in my opinion, reading the original materials on electromagnetic theory, such as Heaviside’s volumes, rather than modern textbooks would be a good recipe!

I recommend anyone interested in this debatable matter check out Ivor Catt’s view on it:

http://www.ivorcatt.co.uk/x18j73.pdf

http://www.ivorcatt.co.uk/x18j184.pdf

To the best of my knowledge, Catt was the first to have noticed and written about the fact that modern texts on electromagnetism actively use the ‘causes’ interpretation of Maxwell’s equations. He also claims that such equations are “obvious truisms about any body or material moving in space”. The debatable matter may then move from the question of the legitimacy of the causal interpretation of these equations towards the question of how useful these equations are for an actual understanding of electromagnetism …


Electromagnetic Compatibility event (EMC-COMPO’17) in St. Petersburg

A very interesting workshop was held at my alma mater (LETI – the Electrotechnical University) in Saint Petersburg, Russia, on 4-8 July 2017.

https://emccompo2017.eltech.ru

The workshop contained lots of interesting presentations – largely from industry, and largely on modelling and empirical measurement of EM interference in microsystems and ICs. Basically, the problem of reuse and block replacement is huge, due to the unpredictability of EM effects between components on a PCB and on a chip.

Here are the presentations:

https://emccompo2017.eltech.ru/results/presentations

Milos Krstic (from IHP) and I gave a keynote talk, which consisted of two parts:

(1) Digital Systems Clocking with and without clock: a historical retrospective (emphasizing the role of researchers from LETI – mostly Victor Varshavsky’s group where I used to work in the 1980s)

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/25-Digital-Systems-Clocking-with-and-without-clock.pdf

(2) Main technical contribution: Reducing Switching Noise Effects by Advanced Clock Management: M. Krstic, X. Fan, M. Babic, E. Grass, T. Bjerregaard, A. Yakovlev

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/03-Reducing-Switching-Noise-Effects.pdf


Talking at the 2016 ARM Research Summit

Last week the inaugural ARM Research Summit took place.

https://developer.arm.com/research/summit

I gave a talk on Power & Compute Codesign for “Little Digital” Electronics.

Here are the slides of this talk:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/Power-and-Compute-Talk

Here is the abstract of my talk:

Power and Compute Codesign for “Little Digital” Electronics

Alex Yakovlev, Newcastle University

alex.yakovlev@ncl.ac.uk

The discipline of electronics and computing system design has traditionally separated power management (regulation, delivery, distribution) from data-processing (computation, storage, communication, user interface). Power control has always been a prerogative of power engineers who designed power supplies for loads that were typically defined in a relatively crude way.


In this talk, we take a different stance and address upcoming electronic systems (e.g. Internet of Things nodes) more holistically. Such systems are miniaturised to the level that power management and data-processing are virtually inseparable in terms of their functionality and resources, and the latter are getting scarce. Increasingly, both elements share the same die, and the control of the power supply – what we call here the “little digital” organ – shares the same silicon fabric as the power supply itself. At present, there are no systematic methods or tools for designing “little digital” circuits that could ensure they perform their duties correctly and efficiently. The talk will explore the main issues involved in formulating the problem of, and automating, the design of little digital circuits, such as models of control circuits and the controlled plants, definition and description of control laws and optimisation criteria, characterisation of correctness and efficiency, and applications such as biomedical implants, IoT ‘things’ and WSN nodes.


Our particular focus in this talk will be on power-data convergence and ways of designing energy-modulated systems [1].  In such systems, the incoming flow of energy will largely determine the levels of switching activity, including data processing – this is fundamentally different from the conventional forms where the energy aspect simply acts as a cost function for optimal design or run-time performance.


We will soon be asking ourselves questions like these: For a given silicon area and given data-processing functions, what is the best way to allocate silicon to power and computational elements? More specifically, for a given energy supply rate and given computation demands, which of the following system designs would be better? One involves a capacitor network for storing energy, investing energy in charging and discharging flying capacitors through computational electronics able to sustain high fluctuations of Vcc (e.g. built using self-timed circuits). The other involves a switched-capacitor converter supplying power at a reasonably stable Vcc (possibly a set of levels); in this latter case it is also necessary to invest some energy in powering the control for the voltage regulator. To decide between these two organisations, one would need to model both designs carefully and characterise them in terms of energy utilisation and delivery of performance for the given computation demands. At present, there are no good ways of co-optimising power and computational electronics.
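To make the trade-off concrete, here is a deliberately crude toy model of the two organisations; every number and function in it is an illustrative assumption, not a characterised design:

```python
# Toy comparison of the two power-delivery organisations discussed above.
# All parameters are illustrative assumptions, not measured data.

E_OP = 1e-12  # assumed switching energy per operation at nominal Vcc (J)

def ops_rate_self_timed(p_in, utilisation=0.95):
    """Design 1: self-timed logic fed directly from flying capacitors.
    No regulator loss; assume a small fraction of harvested energy is
    wasted because the logic slows down at the low end of the Vcc swing."""
    return p_in * utilisation / E_OP

def ops_rate_regulated(p_in, efficiency=0.85, p_ctrl=1e-6):
    """Design 2: switched-capacitor converter giving a stable Vcc.
    Conversion loss, plus a fixed power budget for the converter's own
    'little digital' control circuitry."""
    usable = max(p_in - p_ctrl, 0.0) * efficiency
    return usable / E_OP

for p_in in (1e-6, 1e-5, 1e-4):  # harvested power levels (W)
    print(f"P_in = {p_in:.0e} W: "
          f"self-timed {ops_rate_self_timed(p_in):.2e} ops/s, "
          f"regulated {ops_rate_regulated(p_in):.2e} ops/s")
```

Even this crude model shows the qualitative point: the fixed control-power overhead dominates at the lowest harvesting rates, while at higher rates the comparison hinges on conversion efficiency versus the self-timed logic’s tolerance of Vcc fluctuation.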


Research in this direction is in its infancy, and this is only the tip of the iceberg. This talk will shed some light on how we are approaching the problem of power-data co-design at Newcastle, in a series of research projects producing novel types of sensors, ADCs, asynchronous controllers for power regulation, and software tools for designing “little digital” electronics.

[1] A. Yakovlev. Energy modulated computing. Proceedings of DATE, 2011, Grenoble,  doi: 10.1109/DATE.2011.5763216

My vision of Bio-inspired Electronic Design

I took part in a Panel on Bio-inspired Electronic Design Principles at the …

Here are my slides

The quick summary of these ideas is here:


Summary of ideas for discussion from Alex Yakovlev, Newcastle University


With my 30 years of experience in designing, and automating the design of, self-timed (aka asynchronous) systems, I have been involved in studying and exploiting in practice the following characteristics of electronic systems: inherent concurrency, event-driven and causality-based processing, parametric variation resilience, closed-loop timing error avoidance and correction, energy proportionality, and digital and mixed-signal interfaces. More recently, I have been looking at new bio-inspired paradigms such as energy-modulated and power-adaptive computing, significance-driven approximate computing, real-power (to match real-time!) computing, computing with survival instincts, computing with central and peripheral powering and timing, power layering in systems architecting, exploiting burstiness and regularity of processing, etc.

In most of these, the central role belongs to the notion of energy flow as a key driving force in the new generation of microelectronics. I will therefore be approaching most of the Questions raised for the Panel from the energy-flow perspective. The other strong aspect I want to address, which acts as a driver for innovation in electronics, is the combination of technological and economic factors. This is closely related to survival – both the longevity of a particular system and the survival of design patterns and IPs, i.e. the longevity of a system as a kind, or of the system design process itself.

My main tenets in this discussion are:

  • Compute where energy naturally flows.
  • Evolve (IPs, Designs) where biology (or nature as a whole) would evolve its parts (DNA, cells, cellular networks, organs).

I will also pose, as one of the biggest challenges for semiconductor systems, the challenge of massive informational connectivity of parts at all levels of hierarchy; this is something that I hypothesise can only be addressed in hybrid cell-microelectronic systems. Information (and hence data-processing) flows should be commensurate with energy flows; only then will we be close to thermodynamic limits.
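One concrete benchmark for those ‘thermodynamic limits’ is the Landauer bound on the energy cost of erasing a single bit of information:

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at } T = 300\,\mathrm{K}.
\]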

Alex Yakovlev

11.08.2016


Three more NEMIG talks

There have been three more very interesting talks in our Electromagnetism Interest Group (NEMIG) seminar series.

All their recordings can be found here:

http://www.ncl.ac.uk/eee/research/interestgroups/nemig/

Professor Russell Cowburn
Cavendish Laboratory, University of Cambridge
IEEE Distinguished Lecturer 2015

Most thin magnetic films have their magnetization lying in the plane of the film because of shape anisotropy.  In recent years there has been a resurgence of interest in thin magnetic films which exhibit a magnetization easy axis along the surface normal due to so-called Perpendicular Magnetic Anisotropy (PMA).  PMA has its origins in the symmetry breaking which occurs at surfaces and interfaces and can be strong enough to dominate the magnetic properties of some material systems.  In this talk I explain the physics of such materials and show how the magnetic properties associated with PMA are often very well suited to applications.  I show three different examples of real and potential applications of PMA materials: ultralow power STT-MRAM memory devices for green computing, 3-dimensional magnetic logic structures and a novel cancer therapy.

Prof. David Daniels CBE
Managing Director, Short Range Radar Systems Limited
Visiting Professor at the University of Manchester

Ground penetrating radar (GPR) is an electromagnetic technique for the detection, recognition and identification of objects or interfaces buried beneath the earth’s surface or located within a visually opaque structure. GPR has many applications, ranging from geophysical prospecting and forensic investigation to utility inspection, landmine and IED detection, and through-wall radar for security applications.

The main challenge for GPR as an electromagnetic imaging method is that of an ill-posed problem. The physical environment is in many situations inhomogeneous and consequently both propagation parameters and reflector / target occupancy are spatially variable. Current imaging methods such as diffraction tomography, reverse time migration, range migration and back projection work when the propagation parameters are well described and stable and the target radar cross section is relatively simple. The future challenge for GPR is to develop robust methods of imaging that work in real world conditions with more demanding targets.

The seminar will introduce the principles of the technique, the basic propagation issues as well as time domain and frequency domain system and antenna design from the system engineer’s viewpoint. Various applications will be considered and the basic signal processing methods that are used will be introduced using examples of some signal and imaging processing methods. The seminar will briefly consider the future developments needed to improve the inherent capability of the technique.

Paul Sutcliffe is Professor of Mathematical Physics at Durham University

Abstract: Non-abelian Yang-Mills-Higgs gauge theories have classical solutions that describe magnetic monopoles. These are stable soliton solutions with no singularities, which have the same long-range electromagnetic fields as those of a Dirac monopole. There are also multi-monopole solutions with surprising symmetries, including those of the Platonic solids.