Electromagnetic Compatibility event (EMC-COMPO’17) in St. Petersburg

A very interesting workshop was held at my alma mater (LETI, the Electrotechnical University) in Saint Petersburg, Russia, on 4-8 July 2017.

https://emccompo2017.eltech.ru

The workshop featured many interesting presentations, largely from industry and largely on modelling and empirical measurement of EM interference in microsystems and ICs. Basically, the problem of reuse and block replacement is huge, owing to the unpredictability of EM interactions between components on a PCB and on chip.

Here are the presentations:

https://emccompo2017.eltech.ru/results/presentations

Milos Krstic (from IHP) and I gave a keynote talk, which consisted of two parts:

(1) Digital Systems Clocking with and without clock: a historical retrospective (emphasizing the role of researchers from LETI – mostly Victor Varshavsky’s group where I used to work in the 1980s)

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/25-Digital-Systems-Clocking-with-and-without-clock.pdf

(2) Main technical contribution: Reducing Switching Noise Effects by Advanced Clock Management: M. Krstic, X. Fan, M. Babic, E. Grass, T. Bjerregaard, A. Yakovlev

http://www.eltech.ru/assets/files/en/emccompo-2017/presentations/03-Reducing-Switching-Noise-Effects.pdf

 

Talking at the 2016 ARM Research Summit

Last week the inaugural ARM Research Summit took place.

https://developer.arm.com/research/summit

I gave a talk on Power & Compute Codesign for “Little Digital” Electronics.

Here are the slides of this talk:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/Power-and-Compute-Talk

Here is the abstract of my talk:

Power and Compute Codesign for “Little Digital” Electronics

Alex Yakovlev, Newcastle University

alex.yakovlev@ncl.ac.uk

The discipline of electronics and computing system design has traditionally separated power management (regulation, delivery, distribution) from data-processing (computation, storage, communication, user interface). Power control has always been the prerogative of power engineers, who designed power supplies for loads that were typically defined in a relatively crude way.

 

In this talk, we take a different stance and address upcoming electronic systems (e.g. Internet of Things nodes) more holistically. Such systems are miniaturised to the level that power management and data-processing are virtually inseparable in terms of their functionality and resources, and the latter are getting scarce. Increasingly, both elements share the same die, and the power-supply control, or what we call here a “little digital” organ, shares the same silicon fabric as the power supply itself. At present, there are no systematic methods or tools for designing “little digital” that could ensure that it performs its duties correctly and efficiently. The talk will explore the main issues involved in formulating and automating the design of “little digital” circuits: models of the control circuits and the controlled plants, definition and description of control laws and optimisation criteria, characterisation of correctness and efficiency, and applications such as biomedical implants, IoT ‘things’ and WSN nodes.

 

Our particular focus in this talk will be on power-data convergence and ways of designing energy-modulated systems [1].  In such systems, the incoming flow of energy will largely determine the levels of switching activity, including data processing – this is fundamentally different from the conventional forms where the energy aspect simply acts as a cost function for optimal design or run-time performance.

 

We will soon be asking ourselves questions like these: For a given silicon area and given data-processing functions, what is the best way to allocate silicon between power and computational elements? More specifically, for a given energy supply rate and given computation demands, which of the following system designs would be better? One involves a capacitor network for storing energy, investing energy in charging and discharging flying capacitors through computational electronics able to sustain large fluctuations of Vcc (e.g. built using self-timed circuits). The other involves a switched-capacitor converter that supplies power at a reasonably stable Vcc (possibly a set of levels); in this latter case it would also be necessary to invest some energy into powering the control of the voltage regulator. To decide between these two organisations, one would need to model both designs carefully and characterise them in terms of energy utilisation and delivered performance for the given computation demands. At present, there are no good ways of co-optimising power and computational electronics.
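As a back-of-the-envelope illustration of this trade-off, here is a minimal Python sketch comparing the energy utilisation of the two organisations. All parameter values (the load capacitance, supply voltage, converter efficiency and controller overhead) are assumptions chosen for illustration, not measurements of any real design.

```python
# A toy energy-accounting model of the two organisations discussed above.
# All parameter values (C_LOAD, V_DD, conv_eff, ctrl_frac) are assumptions
# chosen for illustration, not measurements of any real design.

C_LOAD = 1e-9   # switched load capacitance per compute step (F), assumed
V_DD = 1.0      # nominal supply voltage (V), assumed
E_USEFUL = 0.5 * C_LOAD * V_DD ** 2   # energy usefully stored per step


def eff_direct_bank() -> float:
    """Organisation 1: charge the load directly from a capacitor bank.
    Charging a capacitance through a resistive path from a stiff source
    dissipates as much energy as it delivers (CV^2/2 each), giving the
    textbook 50% bound; the self-timed logic is assumed to ride out the
    resulting Vcc fluctuations without extra regulation energy."""
    return E_USEFUL / (2.0 * E_USEFUL)


def eff_switched_cap(conv_eff: float = 0.85, ctrl_frac: float = 0.05) -> float:
    """Organisation 2: a switched-capacitor converter with an assumed
    conversion efficiency, whose 'little digital' control loop consumes
    an assumed extra fraction of the drawn energy."""
    drawn = E_USEFUL / conv_eff      # energy drawn through the converter
    drawn *= 1.0 + ctrl_frac         # plus the controller's own consumption
    return E_USEFUL / drawn


print(f"direct capacitor bank : {eff_direct_bank():.2%}")
print(f"switched-cap converter: {eff_switched_cap():.2%}")
```

Even this toy model makes the point: the answer hinges entirely on the assumed converter efficiency and controller overhead, which is precisely why the two organisations need to be co-modelled rather than designed separately.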

 

Research in this direction is in its infancy, and this is only the tip of the iceberg. This talk will shed some light on how we are approaching the problem of power-data co-design at Newcastle, in a series of research projects producing novel types of sensors, ADCs, asynchronous controllers for power regulation, and software tools for designing “little digital” electronics.

[1] A. Yakovlev. Energy modulated computing. Proceedings of DATE 2011, Grenoble. doi: 10.1109/DATE.2011.5763216

My vision of Bio-inspired Electronic Design

I took part in a Panel on Bio-inspired Electronic Design Principles.

Here are my slides

A quick summary of these ideas is here:

 

Summary of ideas for discussion from Alex Yakovlev, Newcastle University

 

With my 30 years of experience in designing, and automating the design of, self-timed (aka asynchronous) systems, I have studied and exploited in practice the following characteristics of electronic systems: inherent concurrency, event-driven and causality-based processing, resilience to parametric variation, closed-loop timing-error avoidance and correction, energy-proportionality, and digital and mixed-signal interfaces. More recently, I have been looking at new bio-inspired paradigms such as energy-modulated and power-adaptive computing, significance-driven approximate computing, real-power (to match real-time!) computing, computing with survival instincts, computing with central and peripheral powering and timing, power layering in systems architecting, exploiting the burstiness and regularity of processing, etc.

In most of these, the central role belongs to the notion of energy flow as a key driving force in the new generation of microelectronics. I will therefore approach most of the Questions raised for the Panel from the energy-flow perspective. The other strong aspect I want to address, which acts as a driver for innovation in electronics, is the combination of technological and economic factors. This is closely related to survival, both in the sense of the longevity of a particular system and in the sense of the survival of design patterns and IPs – the longevity of the system as a kind, or of the system design process.

My main tenets in this discussion are:

  • Compute where energy naturally flows.
  • Evolve (IPs, Designs) where biology (or nature as a whole) would evolve its parts (DNA, cells, cellular networks, organs).

I will also pose, as one of the biggest challenges for semiconductor systems, the challenge of massive informational connectivity of parts at all levels of hierarchy; this is something that I hypothesise can only be addressed in hybrid cell-microelectronic systems. Information (and hence data-processing) flows should be commensurate with energy flows; only then will we be close to thermodynamic limits.

Alex Yakovlev

11.08.2016

 

Three more NEMIG talks

There have been three more very interesting talks in our Electromagnetism Interest Group’s seminars.

All their recordings can be found here:

http://www.ncl.ac.uk/eee/research/interestgroups/nemig/

Professor Russell Cowburn
Cavendish Laboratory, University of Cambridge
IEEE Distinguished Lecturer 2015

Most thin magnetic films have their magnetization lying in the plane of the film because of shape anisotropy.  In recent years there has been a resurgence of interest in thin magnetic films which exhibit a magnetization easy axis along the surface normal due to so-called Perpendicular Magnetic Anisotropy (PMA).  PMA has its origins in the symmetry breaking which occurs at surfaces and interfaces and can be strong enough to dominate the magnetic properties of some material systems.  In this talk I explain the physics of such materials and show how the magnetic properties associated with PMA are often very well suited to applications.  I show three different examples of real and potential applications of PMA materials: ultralow power STT-MRAM memory devices for green computing, 3-dimensional magnetic logic structures and a novel cancer therapy.

Prof. David Daniels CBE
Managing Director, Short Range Radar Systems Limited
Visiting Professor at the University of Manchester

Ground penetrating radar (GPR) is an electromagnetic technique for the detection, recognition and identification of objects or interfaces buried beneath the earth’s surface or located within a visually opaque structure. GPR has many applications, ranging from geophysical prospecting and forensic investigation to utility inspection, landmine and IED detection, and through-wall radar for security.

The main challenge for GPR as an electromagnetic imaging method is that of an ill-posed problem. The physical environment is in many situations inhomogeneous and consequently both propagation parameters and reflector / target occupancy are spatially variable. Current imaging methods such as diffraction tomography, reverse time migration, range migration and back projection work when the propagation parameters are well described and stable and the target radar cross section is relatively simple. The future challenge for GPR is to develop robust methods of imaging that work in real world conditions with more demanding targets.

The seminar will introduce the principles of the technique, the basic propagation issues as well as time domain and frequency domain system and antenna design from the system engineer’s viewpoint. Various applications will be considered and the basic signal processing methods that are used will be introduced using examples of some signal and imaging processing methods. The seminar will briefly consider the future developments needed to improve the inherent capability of the technique.

Paul Sutcliffe is Professor of Mathematical Physics at Durham University

Abstract: Non-abelian Yang-Mills-Higgs gauge theories have classical solutions that describe magnetic monopoles. These are stable soliton solutions with no singularities that have the same long-range electromagnetic fields as those of a Dirac monopole. There are also multi-monopole solutions with surprising symmetries, including those of the platonic solids.

 

Two more exciting lectures on Electromagnetism

In the last two months we have had two fascinating lectures in our NEMIG series:

The Time Domain, Superposition, and How Electromagnetics Really Works – Dr. Hans Schantz – 14 November 2014

http://async.org.uk/Hans-Schantz.html

Twists & Turns of the Fascinating World of Electromagnetic Waves – Prof. Steve Foti – 12th December 2014

http://async.org.uk/SteveFoti.html

These pages contain the abstracts and videos of the lectures, as well as the speakers’ bios.

 

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Alex Yakovlev

10th October 2014

I think I have recently reached a better understanding of the electromagnetics of physical objects according to Ivor Catt, David Walton, and … surprise, surprise … Oliver Heaviside!

I was interested in Catt and Walton’s derivations of transients (whose envelopes are exponential or sine/cosine curves) as sums of series of steps, and I have recently been revisiting their EM book (Ivor Catt’s “Electromagnetics 1” – see http://www.ivorcatt.co.uk/em.htm ).
I am really keen to understand all this ‘mechanics’ better, as I am gradually settling on the idea that the world is quantised by virtue of energy currents being trapped between reflection points, and that the continuous pictures of the transients are just the results of step-wise processes.

I deliberately use the word ‘quantised’ above because I tend to think that ‘quantisation’ and ‘discretisation’ are practically synonyms in the physical sense (mathematicians may argue, of course, because they may attach some abstract notions to these terms). I’ll try to explain my understanding below.

Let’s see what happens with the TEM wave as it works in a transmission line with reflections. We have a series of steps in voltage which eventually form an exponential envelope. If we examine these steps, they show discrete sections in time and amplitude. The durations of the time sections between steps are determined by the finite, specific geometry of the transmission line and the properties of the (dielectric) medium. The amplitude levels between steps are determined by the electrical properties of the line and the power level of the source.
So, basically, these discrete values associated with the entrapment of energy in the transmission line (TL) are determined by the inherent characteristics of the matter and of the energetic stimulus.
If we stimulated the TL with periodic changes in the energy current, we would observe a periodic process with discretised values in those steps, whose envelope could be a sequence of charging and discharging exponentials.
I suppose that if we combined such a transmission line (largely capacitive in the above) with an inductance, we would have an LC oscillator; this would produce a periodic, similarly step-wise, discretised process whose envelope would be a sine wave.

Now, if we analyse such a system in its discretised (rather than enveloped) form, we could, if we wanted, produce a histogram showing how much time the object in which the energy current is trapped spends at each amplitude level (we could even assign specific energy levels). We could then call such an object a “Quantum Object”. Why not? I guess the only difference between our “quantum object” and the ones Quantum Physicists talk about would be purely mathematical: we know our object well, so our characterisation of the discretised process is deterministic, whereas they do not know their discretised process sufficiently well, and so they resort to probabilities.
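To make the step-wise picture concrete, here is a minimal Python sketch of the standard lattice (bounce-diagram) analysis of an open-ended line charged through a source resistance. The parameter values (Vs, Rs, Z0, round-trip time) are illustrative assumptions, and the construction follows the textbook reflection analysis rather than any particular derivation from Catt and Walton’s book.

```python
# Step-wise charging of a lossless open-ended transmission line driven
# from a source Vs through resistance Rs (standard bounce-diagram view).
# All parameter values are illustrative assumptions.

Vs, Rs, Z0 = 1.0, 200.0, 50.0    # source (V), source resistance, line impedance (ohms)
T_RT = 1.0                       # one round-trip time along the line (arbitrary units)

rho_src = (Rs - Z0) / (Rs + Z0)  # reflection coefficient at the source end
a = Vs * Z0 / (Rs + Z0)          # amplitude of the initially launched step

v_far, t = 0.0, 0.0
levels = []                      # (time, far-end voltage) after each reflection
for _ in range(25):
    v_far += 2.0 * a             # incident and reflected waves add at the open end
    t += T_RT
    levels.append((t, v_far))
    a *= rho_src                 # the echo returns attenuated by the source reflection

for t, v in levels[:8]:
    print(f"t = {t:4.1f}  V_far = {v:.4f}")
```

With Rs > Z0 the source reflection coefficient is positive, so the steps form a geometric series converging to Vs, and their envelope is the familiar charging exponential; since each voltage level is held for exactly one round trip here, the dwell-time histogram over levels suggested above comes out flat.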

If the above makes any sense, may I then make some hypotheses?

We live in a world that has finite-size objects of matter, however large or small they are. These objects have boundaries. The boundaries act as reflection points in the path of the energy current. Hence, associated with these objects and boundaries, we have entrapments of energy. These entrapments, owing to reflections, give rise to discretisation in time and level. The grains of our (discretised) matter can be quite small, so the entrapments can be very small, and we cannot easily measure the individual steps in their sequences; rather, we characterise them by integrative measurements (accumulating and averaging them, as in luminescence), so at some point we end up being probabilistic.

One more thing that bothers me concerns the verticality of the steps and their slopes.
Let’s look at the moment when we change the state of a reed switch, or pull the line up to Vdd or down to GND. The time in which this transition takes place is also non-zero. That is, even if the change propagates at the speed of light (modulo the epsilon and mu of the medium, i.e. with a finite time to destination), the transition of the voltage level must also be associated with some propagation of the field, or forces, inside the reed switch or the transistor, respectively, that pulls the line up or down. Clearly that time-frame is much smaller than the time-frame of propagating the energy current along the transmission line, but it is still not zero. I presume that, quite recursively, we can look at this state change at a finer granularity and see that it is itself a step-wise process of reflections of the energy current within that small object, the switch, and that what we see as a continuous slope is actually the envelope of a step-wise process.

NEMIG lecture by John Arthur on 150 Years of Maxwell’s Equations

http://www.ncl.ac.uk/eee/about/news/item/nemig-seminar-150-years-of-maxwell-s-equations

A video of this lecture will shortly be available on http://async.org.uk.

John Arthur’s book Understanding Geometric Algebra for Electromagnetic Theory:

http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470941634.html

14:00 – 15:00, 22nd September 2014, Room: M4.13 (CPD Room) Merz Court

Dr John Arthur FRSE FREng FinstP FIET SMIEE Trustee, James Clerk Maxwell Foundation

The talk will start with who James Clerk Maxwell was and the origins of his electromagnetic equations. It will show some of the difficulties he had with the mathematics of the day and how, from there, the original cumbersome equations were gradually shaped into the set most commonly seen today. But even now there is confusion and misunderstanding about some of the key concepts, such as questions concerning the roles and relative usefulness of D and E and, even more so, B and H. We therefore set about clarifying the essential fields.  As to the usefulness, or otherwise, of magnetic poles as a concept, we investigate an alternative view of magnetism which leads to the idea of replacing two separate electromagnetic fields with just one composite field. This requires a change to the treatment of Maxwell’s equations which has some surprising benefits, as will be demonstrated, ending up with how they reduce to just one very simple equation.
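For the curious reader: in the spacetime-algebra formulation of electromagnetism (one common geometric-algebra convention; the talk’s exact notation may well differ), the reduction to “just one very simple equation” looks like this:

```latex
% Illustrative spacetime-algebra form of Maxwell's equations; conventions
% differ between authors, so treat this as a sketch of the idea rather
% than the talk's exact notation.
% E and B combine into a single composite bivector field
%   F = \mathbf{E} + I c \mathbf{B},
% and the four Maxwell equations collapse into one:
\[
  \nabla F = J ,
\]
% where \nabla is the spacetime vector derivative, J the spacetime
% current density, and I the unit pseudoscalar.
```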

Speaker’s biography: Although born and bred in Edinburgh, John Arthur took a degree in physics and mathematics at the University of Toronto before returning to complete a PhD in physics at the University of Edinburgh, where he then worked for several years in post-doctoral research. Thereafter he moved from academia to industry, where he spent the greater part of his career specialising in high-technology developments for communications and radar, in areas covering signal processing, surface acoustic waves, microwaves and electronics. He has published a number of articles and, more recently, a book on electromagnetic theory. Since 2012 he has been a trustee of the James Clerk Maxwell Foundation, based at 14 India Street, Edinburgh, the birthplace of James Clerk Maxwell.

Heaviside memorial

The unveiling ceremony was held on Saturday 30th August 2014 at 3pm in Paignton Cemetery. It was attended by the Mayor of Torbay, the MP for Torbay, an ex-curator of the Science Museum (representing the Institution of Engineering and Technology), the Chairman of the Torbay Civic Society, delegates from Newcastle University, representatives from Allwood and Sons (the monument restorers) and members of the general public. Most importantly, the ceremony was honoured by the attendance of a relative of Oliver Heaviside, Alan Heather (Oliver Heaviside’s first cousin three times removed), and his wife.

http://www.torquayheraldexpress.co.uk/Restored-Heaviside-memorial-unveiled-Saturday/story-22858873-detail/story.html

At this ceremony I emphasized that Heaviside, who was an electrical engineer at the start of his professional life and whose work originated in solving practical engineering problems (e.g. telegraphy and telephony), made an unprecedented impact on the fundamental disciplines of mathematics and physics. This should be seen by many students, researchers and engineers as an inspiration for the creative process in science. Unlike the accepted “causal path”, which people often associate with applying basic science to engineering problems, the truly innovative causal path is actually the reverse. On this path, one starts with the engineering problem, finds a practically working solution – very often engineering intuition helps here – and then “invents” the mathematics and physics to describe the solution as a phenomenon. Heaviside’s whole life followed this path, which is well epitomized by his famous saying: “We reverse this; the current in the wire is set up by the energy transmitted through the medium around it.” (“Electrical Papers”, Vol. 1, page 438, by Oliver Heaviside.) Here the engineering method acts as the driving energy, and its product, the scientific method, is like the current in the wire.

I am sure that Heaviside is a brilliant example that we should tell our students about when attracting them into (electrical and electronic) engineering – a field where they can make an impact on the fundamental sciences without being professional mathematicians or physicists. They need to be creative and imaginative!

 

New Book on Modelling Concurrent Systems using Petri nets

A new book has been published in Saint Petersburg by Professional Literature.

Marakhovsky V.B., Rozenblyum L.Ya., Yakovlev A.V. Моделирование параллельных процессов. Сети Петри (Modelling Concurrent Processes. Petri Nets; in Russian). Saint Petersburg: Профессиональная литература / АйТи-Подготовка (Professional Literature, www.profliteratura.ru), 2014.


 

The book is in the Series of Selected Titles in Computer Science. It presents a course for Systems Architects and Programmers, Systems Analysts and Designers of Complex Control Systems.

Practically any more-or-less complex information or control system has components that operate concurrently, in other words in parallel. This book presents methods for the formal dynamical modelling of parallel asynchronous processes. Such processes can be found in various application areas, such as computation, control, interfaces, programming, robotics and artificial intelligence.

It is emphasized in this book that there is an important relationship between a structural model, which reflects static properties of the modelled system, and its dynamic (behavioural) model. This two-pronged fundamental approach is suitable at all stages of system design – specification, analysis, implementation and verification.

The book has numerous examples and exercises, which makes it a good supporting text for courses in various syllabi involving modelling information and control systems.
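As a quick flavour of the book’s subject matter (a generic illustration, not an example taken from the book), here is a minimal Petri net interpreter in Python: places hold tokens, a transition is enabled when all of its input places are marked, and firing consumes a token from each input place and produces one in each output place.

```python
# A minimal Petri net firing-rule sketch (generic illustration, not an
# example from the book). A marking maps place names to token counts.

from typing import Dict, List, Tuple

Transition = Tuple[List[str], List[str]]   # (input places, output places)


def enabled(marking: Dict[str, int], t: Transition) -> bool:
    """A transition is enabled when every input place holds a token."""
    ins, _ = t
    return all(marking.get(p, 0) >= 1 for p in ins)


def fire(marking: Dict[str, int], t: Transition) -> Dict[str, int]:
    """Firing consumes one token per input place, produces one per output."""
    ins, outs = t
    m = dict(marking)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] = m.get(p, 0) + 1
    return m


# Two concurrent processes that synchronise on a shared transition:
transitions: Dict[str, Transition] = {
    "t1": (["p1"], ["p2"]),                  # process A advances on its own
    "t2": (["p3"], ["p4"]),                  # process B advances on its own
    "t_sync": (["p2", "p4"], ["p1", "p3"]),  # both must arrive to proceed
}
marking = {"p1": 1, "p3": 1}

for name in ("t1", "t2", "t_sync"):
    if enabled(marking, transitions[name]):
        marking = fire(marking, transitions[name])
        print(f"fired {name}: {marking}")
```

The two cycles here run independently and meet only at t_sync – exactly the kind of concurrent, asynchronous behaviour that Petri nets model directly.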

 

Chris Spargo’s talk on Heaviside at NEMIG seminar on August 4, 2014

NEMIG Seminar: Oliver Heaviside FRS: The Man, His Work and Memorial Project

13:45 – 14:30, 4th August 2014, Room: M4.13 (CPD Room) Merz Court

A talk, followed by a discussion session, will be given by NEMIG co-founder Christopher Spargo.

The talk will briefly outline Heaviside’s life from childhood until his death, his friendships, relations and recognition. It will then discuss some of his major achievements, from the reformulation of Maxwell’s equations into the form known today, through the problem of Victorian telegraphy and Heaviside’s solution via his distributed transmission line model, to some little-known aspects of his work that nevertheless have had a major impact. The talk will conclude with an introduction to the Heaviside Memorial Project, of which the speaker is the founder and project director. Questions and discussion amongst the group afterwards are most welcome.