PN’2015 Advanced Tutorial: Modeling, Synthesis and Verification of Hardware

We are giving an Advanced Tutorial, “Modeling, Synthesis and Verification of Hardware”, on Tuesday 23rd June at the Petri Nets 2015 conference in Brussels.

The agenda of the tutorial and directions to the venue can be found here:

Everyone is welcome!


Our talks at ASYNC 2015 in Mountain View, Silicon Valley

We gave two talks on our papers accepted for ASYNC 2015:

  • Design and Verification of Speed-Independent Multiphase Buck Controller    [Slides]
    Danil Sokolov, Victor Khomenko, Andrey Mokhov, Alex Yakovlev, and David Lloyd
  • Opportunistic Merge Element    [Slides]
    Andrey Mokhov, Victor Khomenko, Danil Sokolov, and Alex Yakovlev

Both papers emerged from our project A4A (Async for Analogue).

Three more NEMIG talks

There have been three more very interesting talks in our Electromagnetism Interest Group’s seminar series.

All their recordings can be found here:

Professor Russell Cowburn
Cavendish Laboratory, University of Cambridge
IEEE Distinguished Lecturer 2015

Most thin magnetic films have their magnetization lying in the plane of the film because of shape anisotropy.  In recent years there has been a resurgence of interest in thin magnetic films which exhibit a magnetization easy axis along the surface normal due to so-called Perpendicular Magnetic Anisotropy (PMA).  PMA has its origins in the symmetry breaking which occurs at surfaces and interfaces and can be strong enough to dominate the magnetic properties of some material systems.  In this talk I explain the physics of such materials and show how the magnetic properties associated with PMA are often very well suited to applications.  I show three different examples of real and potential applications of PMA materials: ultralow power STT-MRAM memory devices for green computing, 3-dimensional magnetic logic structures and a novel cancer therapy.

Prof. David Daniels CBE
Managing Director, Short Range Radar Systems Limited
Visiting Professor at University of Manchester

Ground penetrating radar (GPR) is an electromagnetic technique for the detection, recognition and identification of objects or interfaces buried beneath the earth’s surface or located within a visually opaque structure. GPR can be used for many applications, ranging from geophysical prospecting and forensic investigation to utility inspection, landmine and IED detection, and through-wall radar for security applications.

The main challenge for GPR as an electromagnetic imaging method is that of an ill-posed problem. The physical environment is in many situations inhomogeneous and consequently both propagation parameters and reflector / target occupancy are spatially variable. Current imaging methods such as diffraction tomography, reverse time migration, range migration and back projection work when the propagation parameters are well described and stable and the target radar cross section is relatively simple. The future challenge for GPR is to develop robust methods of imaging that work in real world conditions with more demanding targets.

The seminar will introduce the principles of the technique, the basic propagation issues as well as time domain and frequency domain system and antenna design from the system engineer’s viewpoint. Various applications will be considered and the basic signal processing methods that are used will be introduced using examples of some signal and imaging processing methods. The seminar will briefly consider the future developments needed to improve the inherent capability of the technique.

Paul Sutcliffe is Professor of Mathematical Physics at Durham University

Abstract: Non-abelian Yang-Mills-Higgs gauge theories have classical solutions that describe magnetic monopoles. These are stable soliton solutions, free of singularities, that have the same long-range electromagnetic fields as those of a Dirac monopole. There are also multi-monopole solutions that have surprising symmetries, including those of the platonic solids.


My Keynote “Putting Computing on a Strict Diet with Energy-Proportionality”

I gave a keynote talk on “Putting Computing on a Strict Diet with Energy-Proportionality” at the XXIX Conference on Design of Circuits and Integrated Systems, held in Madrid on 26–28th November 2014.

The abstract of the talk can be found in the conference programme:

The slides of the talk can be found here:


Two more exciting lectures on Electromagnetism

In the last two months we have had two fascinating lectures in our NEMIG series:

The Time Domain, Superposition, and How Electromagnetics Really Works – Dr. Hans Schantz – 14th November 2014

Twists & Turns of the Fascinating World of Electromagnetic Waves – Prof. Steve Foti – 12th December 2014

The abstracts and videos of these lectures, as well as the biographies of the speakers, can be found here:


On Quantisation and Discretisation of Electromagnetic Effects in Nature

Alex Yakovlev

10th October 2014

I think I have recently reached a better understanding of the electromagnetics of physical objects according to Ivor Catt, David Walton, and … surprise, surprise … Oliver Heaviside!

I was interested in Catt and Walton’s derivations of the transients (whose envelopes are exponential or sine/cosine curves) as sums of series of steps. I have recently been re-visiting their EM book (Ivor Catt’s “Electromagnetics 1”).
I am really keen to understand all this ‘mechanics’ better, as it seems that I am gradually settling on the idea that the world is quantised by virtue of energy currents being trapped between reflection points, and that the continuous pictures of the transients are just the results of step-wise processes.

I deliberately use the word ‘quantised’ above because I tend to think that ‘quantisation’ and ‘discretisation’ are, in a physical sense, practically synonyms (mathematicians may of course argue, since they may attach more abstract notions to these terms). I’ll try to explain my understanding below.

Let’s see what happens with the TEM wave as it travels in a transmission line with reflections. We have a series of steps in voltage which eventually form an exponential envelope. If we examine these steps, they show discrete sections in time and amplitude. The durations of the time intervals between the steps are determined by the finite, specific geometry of the transmission line and the properties of the (dielectric) medium. The amplitude of each step is determined by the electrical properties of the line and the power level of the source.
So, basically, these discrete values associated with the energy entrapment in the transmission line (TL) are determined by the inherent characteristics of the matter and the energetic stimulus.
If we stimulated the TL with periodic changes in the energy current, we could observe a periodic process with discretised values in those steps, the envelope of which could be a sequence of charging and discharging exponentials.
I suppose that if we set up the transmission line (which is largely capacitive in the above) with an inductance, we would have an LC oscillator; this would produce a periodic, similarly step-wise, discretised process whose envelope would be a sine wave.
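The step-wise charging picture above can be sketched numerically. The code below (a minimal illustration; the values of Vs, Rs, Z0 and the delay T are assumed for the sake of the example, not taken from the text) computes the voltage at the open far end of a lossless line driven by a step source through a source resistance. Each arriving wave doubles at the open end, and the running sum is a geometric series whose envelope is the familiar charging exponential.

```python
# Step-wise charging of an open-ended lossless transmission line,
# driven by a step source Vs through a source resistance Rs.
# All parameter values are illustrative.

Vs = 1.0      # source step amplitude (V)
Rs = 500.0    # source resistance (ohms), deliberately >> Z0
Z0 = 50.0     # characteristic impedance of the line (ohms)
T = 1.0       # one-way propagation delay (arbitrary time units)

gamma_s = (Rs - Z0) / (Rs + Z0)   # reflection coefficient at the source
v1 = Vs * Z0 / (Rs + Z0)          # wave amplitude launched into the line

# Voltage at the open far end after each round trip: the wave reflects
# with coefficient +1 at the open end (so each arrival contributes twice
# its amplitude) and with gamma_s at the source, giving a geometric series.
levels = []
total = 0.0
for k in range(40):
    total += 2 * v1 * gamma_s**k
    levels.append((T * (2 * k + 1), total))   # (arrival time, voltage level)

print(levels[0])      # first step, at t = T
print(levels[-1][1])  # staircase converges towards Vs
```

The discrete time between steps (one round trip, 2T) comes from the geometry and medium of the line, and the step amplitudes from Z0, Rs and the source level, exactly as the paragraph above describes.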

Now, if we analyse such a system in its discretised (rather than enveloped) form, we could, if we wanted, produce a histogram showing how much time the object in which we trap the energy current spends at each amplitude level (we could even assign specific energy levels). Now we can call such an object a “Quantum Object”. Why not? I guess the only difference between our “quantum object” and the ones that quantum physicists talk about would be purely mathematical. We know our object well, and our characterisation of the discretised process is deterministic; they don’t know their discretised process sufficiently well, and so they resort to probabilities.
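The dwell-time histogram described above can be sketched for the sinusoidal (LC-like) case: sample a step-wise sine process at uniform time steps, quantise the amplitude into levels, and count how long the system sits at each level. The period and the number of levels here are illustrative assumptions.

```python
import math
from collections import Counter

# Dwell-time "histogram" for a discretised sinusoidal process:
# count how many uniform time steps the amplitude spends in each bin.
# Parameter values are illustrative.

n_steps = 10000    # total number of discrete time steps (many periods)
period = 400       # steps per oscillation period
n_levels = 10      # number of amplitude bins ("levels")

counts = Counter()
for k in range(n_steps):
    v = math.sin(2 * math.pi * k / period)                   # amplitude in [-1, 1]
    level = min(int((v + 1) / 2 * n_levels), n_levels - 1)   # bin into 0..n_levels-1
    counts[level] += 1

for level in sorted(counts):
    print(level, counts[level])
```

The resulting distribution is fully deterministic, and it is concentrated at the extreme levels, where the sine dwells longest near its turning points, which is the point of the paragraph: the same kind of level-occupancy picture that a probabilistic description would produce, obtained here from a completely known process.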

If the above makes any sense, may I then make some hypotheses?

We live in a world of finite-size objects of matter, however large or small they are. These objects have boundaries. The boundaries act as reflection points in the path of the energy current. Hence, associated with these objects and boundaries, we have entrapments of energy. These entrapments, due to reflections, give rise to discretisation in time and level. The grains of our (discretised) matter can be quite small, so the entrapments can be very small; we cannot easily measure these steps in their sequences, but rather characterise them by integrative measurements (accumulating and averaging them, as in luminescence), hence at some point we end up being probabilistic.

One more thing that bothers me concerns the verticality of the steps and their slopes.
Let’s look at the moment when we change the state of a reed switch, or pull a line up to Vdd or down to GND. The time in which this transition takes place is also non-zero. That is, even if the change propagates at the speed of light (modulo the epsilon and mu of the medium), i.e. reaches its destination in finite time, the transition of the voltage level must also be associated with some propagation of the field, or forces, inside the reed switch or the transistor that pulls the line up or down. Clearly that time frame is much smaller than the time frame of propagating the energy current along the transmission line, but it is still not zero. I presume that, quite recursively, we can look at this state change at a finer granularity and see that it is itself a step-wise process of reflections of the energy current within that small object, the switch, and that what we see as a continuous slope is actually the envelope of a step-wise process.

NEMIG lecture by John Arthur on 150 Years of Maxwell’s Equations

A video of this lecture will shortly be available here:

John Arthur’s book, Understanding Geometric Algebra for Electromagnetic Theory:

14:00 – 15:00, 22nd September 2014, Room: M4.13 (CPD Room) Merz Court

Dr John Arthur FRSE FREng FInstP FIET SMIEEE, Trustee, James Clerk Maxwell Foundation

The talk will start with who James Clerk Maxwell was and the origins of his electromagnetic equations. It will show some of the difficulties he had with the mathematics of the day and how, from there, the original cumbersome equations were gradually shaped into the set most commonly seen today. But even now there is confusion and misunderstanding about some of the key concepts, such as questions concerning the roles and relative usefulness of D and E and, even more so, B and H. We therefore set about clarifying the essential fields.  As to the usefulness, or otherwise, of magnetic poles as a concept, we investigate an alternative view of magnetism which leads to the idea of replacing two separate electromagnetic fields with just one composite field. This requires a change to the treatment of Maxwell’s equations which has some surprising benefits, as will be demonstrated, ending up with how they reduce to just one very simple equation.

Speaker’s biography: Although born and bred in Edinburgh, John Arthur took a degree in physics and mathematics at the University of Toronto before returning to complete a PhD in physics at the University of Edinburgh, where he then worked for several years in post-doctoral research. Thereafter he moved from academia to industry, where he spent the greater part of his career specialising in high-technology developments for communications and radar, in areas covering signal processing, surface acoustic waves, microwaves and electronics. He has published a number of articles and, more recently, a book on electromagnetic theory. Since 2012 he has been a trustee of the James Clerk Maxwell Foundation based at 14 India Street, Edinburgh, the birthplace of James Clerk Maxwell.