PN’2015 Advanced Tutorial: Modeling, Synthesis and Verification of Hardware

We are giving an Advanced Tutorial, “Modeling, Synthesis and Verification of Hardware”, on Tuesday 23rd June at the Petri Nets 2015 conference in Brussels.

The agenda of the tutorial and directions to the venue can be found here:

http://www.ulb.ac.be/di/verif/pn2015acsd2015/satellite.html#Tuto2

Everyone is welcome!


Our talks at ASYNC 2015 in Mountain View, Silicon Valley

We gave two talks on our papers accepted for ASYNC 2015:

http://ee.usc.edu/async2015/

  • Design and Verification of Speed-Independent Multiphase Buck Controller    [Slides]
    Danil Sokolov, Victor Khomenko, Andrey Mokhov, Alex Yakovlev, and David Lloyd
  • Opportunistic Merge Element    [Slides]
    Andrey Mokhov, Victor Khomenko, Danil Sokolov, and Alex Yakovlev

Both papers emerged from our project A4A (Async for Analogue).

My Keynote “Putting Computing on a Strict Diet with Energy-Proportionality”

I gave a keynote talk on “Putting Computing on a Strict Diet with Energy-Proportionality” at the XXIX Conference on Design of Circuits and Integrated Systems, held in Madrid on 26-28 November 2014.

The abstract of the talk can be found in the conference programme:

http://www.cei.upm.es/dcis/wp-content/uploads/2014/10/DCIS_2014_program.pdf

The slides of the talk can be found here:

http://async.org.uk/Alex.Yakovlev/Yakovlev-DCIS2014-Keynote-final.pdf


Two more exciting lectures on Electromagnetism

In the last two months we have had two fascinating lectures in our NEMIG series:

The Time Domain, Superposition, and How Electromagnetics Really Works – Dr. Hans Schantz – 14 November 2014

http://async.org.uk/Hans-Schantz.html

Twists & Turns of the Fascinating World of Electromagnetic Waves – Prof. Steve Foti – 12 December 2014

http://async.org.uk/SteveFoti.html

These pages contain links to the abstracts and videos of the lectures, as well as the bios of the speakers.


On Quantisation and Discretisation of Electromagnetic Effects in Nature

Alex Yakovlev

10th October 2014

I think I have recently reached a better understanding of the electromagnetics of physical objects, following Ivor Catt, David Walton, and … surprise, surprise … Oliver Heaviside!

I was interested in Catt and Walton’s derivations of transients (whose envelopes are exponential or sine/cosine curves) as sums of series of steps, and I have recently been revisiting their EM book (Ivor Catt’s “Electromagnetics 1” – see http://www.ivorcatt.co.uk/em.htm ).
I am really keen to understand this ‘mechanics’ better, as I am gradually settling into the idea that the world is quantised by virtue of energy currents being trapped between reflection points, and that the continuous pictures of transients are just the results of step-wise processes.

I deliberately use the word ‘quantised’ above because I tend to think that ‘quantisation’ and ‘discretisation’ are, in the physical sense, practically synonyms (mathematicians may argue, of course, since they attach additional abstract notions to these terms). I’ll try to explain my understanding below.

Let’s see what happens with a TEM wave in a transmission line with reflections. We have a series of steps in voltage which eventually form an exponential envelope. If we examine these steps, they show discrete sections in time and amplitude. The durations of the time sections between steps are determined by the finite, specific geometry of the transmission line and the properties of the (dielectric) medium. The amplitude levels of the steps are determined by the electrical properties of the line and the power level of the source.
So, basically, these discrete values associated with the energy entrapment in the transmission line (TL) are determined by the inherent characteristics of the matter and the energetic stimulus.
If we stimulated the TL with periodic changes in the energy current, we could observe the periodic process with discretised values in those steps – the envelope of which could be a sequence of charging and discharging exponentials.
I suppose that if we augmented the transmission line (which is largely capacitive in the above) with an inductance, we would obtain an LC oscillator; this would produce a periodic, similarly step-wise, discretised process whose envelope would be a sine wave.
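As a rough numerical illustration of the step-wise charging described above (my own sketch, not taken from Catt and Walton’s book; all component values are arbitrary), consider an open-circuited line of impedance Z0 driven through a source resistance R. Each round trip of the energy current adds one step, and the step heights follow a geometric series whose envelope is the familiar RC exponential:

```python
# Sketch of step-wise transmission-line charging (illustrative values only).
Z0 = 50.0   # characteristic impedance of the line (ohms)
R = 200.0   # source resistance (ohms)
V = 1.0     # source step voltage (volts)
T = 1e-9    # round-trip time of the line (seconds)

# Reflection coefficient at the source end; the far end is open (rho = 1).
rho = (R - Z0) / (R + Z0)

# Voltage at the open end after n round trips: a geometric series
# V * (1 - rho**n), i.e. an exponential envelope sampled every T seconds.
for n in range(1, 11):
    print(f"t = {n * T:.1e} s   V = {V * (1 - rho**n):.4f}")
```

Each printed value is one of the discrete amplitude levels; the time step T is fixed by the line’s geometry and the medium, exactly as the text argues.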

Now, if we analysed such a system in its discretised (rather than enveloped) form, we could, if we wanted, produce a histogram showing how much time the object in which we trap the energy current spends at each amplitude level (we could even assign specific energy levels). We could then call such an object a “Quantum Object”. Why not? I guess the only difference between our “quantum object” and the ones quantum physicists talk about would be purely mathematical: we know our object well, so our characterisation of the discretised process is deterministic, whereas they do not know their discretised process sufficiently well, and so they resort to probabilities.
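The dwell-time histogram described above is easy to sketch numerically. Assuming, purely for illustration, one charge/discharge cycle of the step-wise process with a reflection coefficient of 0.6 (my own hypothetical numbers), each level is occupied for one round-trip time, so a uniform-weight histogram over the levels already shows where the “quantum object” spends its time – mostly near the two rails:

```python
import numpy as np

# Hypothetical dwell-time histogram for one charge/discharge cycle
# of a step-wise transmission-line process (illustrative values only).
rho, V, n_steps = 0.6, 1.0, 20

charge = [V * (1 - rho**n) for n in range(1, n_steps + 1)]   # rising steps
discharge = [V * rho**n for n in range(1, n_steps + 1)]      # falling steps
levels = np.array(charge + discharge)  # each level lasts one round trip

# Uniform weights (one round-trip time per level); the bins near 0 and V
# collect many closely spaced levels, so the object "dwells" near the rails.
counts, edges = np.histogram(levels, bins=4, range=(0.0, 1.0))
print(counts)
```

The outer bins dominate because the geometric steps crowd together near the asymptotes, which is precisely the deterministic “level distribution” the text proposes.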

If the above makes any sense, may I then make some hypotheses?

We live in a world of finite-size objects of matter, however large or small. These objects have boundaries, and the boundaries act as reflection points in the way of the energy current. Hence, associated with these objects and their boundaries, we have entrapments of energy. These entrapments, due to reflections, give rise to discretisation in time and level. The grains of our (discretised) matter can be quite small, so the entrapments can be very small, and we cannot easily measure the steps in their sequences; rather, we characterise them by integrative measurements (accumulating and averaging them, as in luminescence), hence at some point we end up being probabilistic.

One more thing that bothers me is the verticality of the steps and their slopes.
Let’s look at the moment when we change the state of a reed switch, or pull a line up to Vdd or down to GND. The time in which this transition takes place is also non-zero. That is, even if the change propagates at the speed of light (modulo the epsilon and mu of the medium), i.e. reaches its destination in finite time, the transition of the voltage level must itself be associated with some propagation of the field, or forces, inside the reed switch or the transistor that pulls the line up or down. Clearly that time-frame is much smaller than the time-frame of propagating the energy current along the transmission line, but it is still not zero. I presume that, quite recursively, we can look at this state change at a finer granularity and see that it is itself a step-wise process of reflections of the energy current within that small object, the switch, and that what we see as a continuous slope is actually the envelope of a step-wise process.

Eliminating “competitors” by not giving them enough energy

One possible strategy for differentiating some types of electronics from others is to stage a “power-modulated competition” between them, by gradually tuning the power source in different ways, for example in terms of power levels (through voltage and/or current), and also dynamically. Circuits that require a stable and sufficiently high voltage will gradually be eliminated from the race … Only those that can survive across the power dynamic range will pass through this natural selection!

Building such a test bed is an interesting challenge by itself!