You can read about Victor Varshavsky’s contributions in this document: https://web.cecs.pdx.edu/~mperkows/CLASS_573/Asynchr_Febr_2007/M.pdf

I am immensely proud to be one of his disciples.


The lecture was on my relatively recent ideas of bringing machine learning into computing at different scales and levels of abstraction – basically making it a commodity that can be introduced to improve the quality of computing in many respects, in particular performance and the use of energy.

The advert of the lecture can be found here: https://informatics.tuwien.ac.at/news/2199

There is also a recording of the lecture available here: https://tube1.it.tuwien.ac.at/w/ebJrRwrJP2ozpsoWAyfy3T

Petri nets are known to capture, in a form that is very natural for comprehension, the ideas of causality, concurrency and choice. The way the Petri net primitives – transitions (bars or boxes), places (circles), the flow relation between them (arcs between places and transitions and between transitions and places), and the marking of places (tokens) – can ‘speak’ the language of EM pulses propagating in transmission lines and interacting at their cross-points is quite interesting. For example, firstly, the fact that EM, or to be precise TEM, pulses cannot wait for each other at the points of crossing is expressed in the corresponding Petri nets by the absence of multiple incidence of flows on the same transition. In other words, we cannot have AND causality in TEM switching structures. On the other hand, secondly, due to the physical superposition of TEM pulses travelling from different sources, we have a pure effect of OR causality, which manifests itself in the Petri nets as multiple incidence of flows of tokens on the same place. Thirdly, the fact that pulses are propagated and reflected at the points of crossing in the proportions dictated by the impedance ratios, represented by the scattering matrices, is manifested in the Petri net model by the corresponding fractioning and addition of tokens in the places standing for these pulse interactions.
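As a toy illustration of this correspondence (my own sketch, not a tool from this line of work), one can let places hold fractional ‘token mass’ (pulse energy), restrict every transition to a single input place (no AND causality), and let a cross-point transition split its token between ‘reflected’ and ‘transmitted’ places in proportions given by an assumed scattering coefficient rho:

```python
# Toy sketch (my illustration): a Petri-net-like model of a TEM cross-point.
# Places hold fractional token mass (pulse energy); every transition has a
# single input place (no AND causality); arcs from different sources into
# the same place simply add (OR causality).
marking = {"incident": 1.0, "reflected": 0.0, "transmitted": 0.0}

def fire_crossing(marking, rho):
    """Fire the cross-point transition: split the incident token mass
    into reflected and transmitted fractions, rho and 1 - rho."""
    m = marking["incident"]
    marking["incident"] = 0.0
    marking["reflected"] += rho * m          # fraction bounced back
    marking["transmitted"] += (1 - rho) * m  # fraction carried onward

fire_crossing(marking, rho=0.25)             # assumed scattering coefficient
print(marking)  # {'incident': 0.0, 'reflected': 0.25, 'transmitted': 0.75}
```

Token mass arriving at the same place from several cross-points would simply accumulate, which is exactly the OR-causality reading above.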

The type of Petri nets characterising TEM pulse interaction is fairly unique and worthy of separate investigation. For example, the EM nature of information flow in such structures has the property of reciprocity, i.e. the ‘execution runs’ of these processes can be played back to the original states, and hence the modelling Petri nets possess a certain notion of reversibility. In his PhD study, Alex Ventisei is planning to advance this modelling work further to capture more complex structures of TEM pulse interactions, to complement the existing methods of modelling based on scattering matrices with graphical models using such Petri nets, and to develop simulation and analysis tools.


Herein I am reproducing our exchange:

**Alex Yakovlev:**

An example of the demonstration that quantum physics is NOT the way to explain physics. Here is a story about lasers from Electronics Weekly:

+++++++++++++++++++++++++++++++++++

“Markus Pollnau, Professor in Photonics at the University of Surrey, said: “Since the laser was invented in 1960, the laser spectral linewidth has been treated as the stepchild in the descriptions of lasers in textbooks and university teaching worldwide, because its quantum-physical explanation has placed extraordinary challenges even for the lecturers.

“As we have explained in this study, there is a simple, easy-to-understand derivation of the laser spectral linewidth, and the underlying classical physics proves the quantum-physics attempt of explaining the laser spectral linewidth hopelessly incorrect. This result has fundamental consequences for quantum physics.””

++++++++++++++++++++++++++++++++++

**Dave Walton:**

I have puzzled for some time over two simple questions:

1. How long is a photon?

2. Why is the wavelength of light so much longer than the physical size of an atom?

As far as question 1 is concerned, the authors of this paper seem to be seeing the photon as a truncated sine wave which leads to its finite spectral width when expressed as a Fourier transform. This can lead directly to a calculation of photon length.

Question 2 is a puzzle to me. As engineers we are familiar with the idea that the size of an effective antenna must be of the order of the radiated wavelength (1/4 wave or 5/8 wave, etc.). But the atom is much smaller than the wavelength of emitted light. The typical size of an atom is 5×10^-10 m, but the wavelength of light is of the order of 5×10^-7 m, which is a factor of 1000 larger. So how does it succeed in being such an efficient radiator?

Of course, Quantum Theory does not address these questions at all. It simply states that the ‘Quantum Jump’ happens, and we cannot dissect it any further.

**Alex Yakovlev:**

I like both of your questions a lot.

Questions about real physical size dimensions are most pertinent if we think about energy current propagating in space (but how else to think?!).

Quantum Theory does not seem to address the dynamics of energy in real space, succumbing to abstract transitions between states in phase (state) space.

Reading your points regarding these questions is fascinating. Re Q2 in particular: it suggests that the sine wave period (i.e., wavelength) can only be 1000 times longer than the atom IFF this sine wave is constructed of many (of the order of 1000 or so) steps, each the size of the atom, where each step takes the time of flight of the ExH current travelling between the ‘walls of the atom’. Pretty much like the time constant of the capacitor charge (discharge) exponential via a resistor R, where the cap is a TL.

Isn’t it?

So, generally, can all these different wavelengths be the result of epsilon/mu (i.e. the characteristic impedance of the medium) plus the size of the unit of space that generates the light, producing the sine wave in a manner similar to an L&C TL?

And a photon is indeed a section of that sine wave.

I reproduced in my RS paper [cf. Energy current and computing | Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences (royalsocietypublishing.org)] your derivation of the time constant for the exponential charge/discharge via stepwise processes, i.e., tau = eps*l*R/f, where l is the TL length and f is a geometric parameter linking the unit capacitance to the permittivity, C = eps/f (so that tau = R*C*l).
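A quick numeric sketch of that staircase-exponential relation (the numbers below are mine, purely for illustration): a line of length l charged to V0 and discharged through R >> Z0 has its internal step scaled by rho = (R - Z0)/(R + Z0) on each round trip, and the resulting staircase follows the exponential with tau = R*C*l, since Z0*c = 1/C for the per-unit-length capacitance C:

```python
import math

# Illustrative numbers of mine, not from the paper.
Z0, R = 1.0, 1000.0        # line impedance much smaller than discharge R
c = 2e8                    # propagation speed in the medium, m/s
l, V0 = 1.0, 10.0          # line length (m) and initial voltage
C = 1.0 / (Z0 * c)         # per-unit-length capacitance, from Z0*c = 1/C
rho = (R - Z0) / (R + Z0)  # reflection coefficient at the resistor
tau = R * C * l            # predicted time constant, tau = R*C*l
dt = 2 * l / c             # one round trip = duration of one step

V, t = V0, 0.0
for step in range(5):      # staircase vs. the smooth exponential
    print(f"t = {t:.2e} s  staircase = {V:.4f} V  exp = {V0 * math.exp(-t / tau):.4f} V")
    V *= rho
    t += dt
```

With these numbers there are about R/(2*Z0) = 500 steps per time constant, i.e. one swing is built of hundreds of flight-time-sized steps, in the spirit of the estimate above.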

Perhaps we can derive the period of the sine wave for the case of the L&C pair of TLs, and in this way determine the length of a photon?

**Dave Walton:**

Very interesting, Alex.

It is amazing how the obvious can escape one for so long, but I had honestly never considered a transmission line model for the atom (!!).

Given the factor of 1,000 between the atomic diameter and the wavelength of the photon, this would suggest about 1000 transitions before all of the energy current escapes. This suggests a reflection coefficient of the same order, i.e., 1/1000.

The next questions would seem to be,

1. What is the energy current arrangement in the atom?

2. How exactly do these arrangements change during the process of emission?

I would be very interested to hear your thoughts on these and other issues which arise.

**Alex Yakovlev:**

Just to clarify: if we have 1000 steps for one swing of the wave (or exponential) between High and Low, shouldn’t the reflection coefficient be the complement, i.e., 999/1000?

That is basically to say that the characteristic impedance of the internals of the atom is 1000 times smaller than the ohmic impedance of the interface, right?

For comparison, Fig. 10 of Forrest’s paper

http://www.naturalphilosophy.org//pdf//abstracts/abstracts_6554.pdf

shows quite the opposite effect: the TL is so long that the time of flight in the TL is twice the time constant of the equivalent RC circuit. Here, despite the fact that the reflection coefficient is even negative, the step is commensurate with the time constant. So this is probably the effect of the capacitance of the TL being relatively small.

So we somehow need to take into account not only the Z0 vs R ratio, but also C as a function of the length and epsilon, which in the atomic case needs to be relatively large compared to ordinary coax-like TLs. For the atom, we are probably talking about much higher C per unit size, i.e., very high epsilon, right?

Incidentally, can we also calculate the frequency/clock period of the TL-based LC circuit? Was it your or Mike Gibson’s derivation of the sine wave for the TL-based LC oscillator? In Ivor’s Electromagnetics 1 book I cannot find the frequency/period parameters of the sine wave, so that we could work out some likely L and C values for the atom.

Your Qs:

1. What is the energy current arrangement in the atom?

That is a good question. If we think in 3D, this might be some kind of cube with the Poynting vector rotating around the nucleus, with E directed radially and H azimuthally?

2. How exactly do these arrangements change during the process of emission?

What would trigger emission? Some sort of window opening so that the sine wave can radiate out?

**Alex Yakovlev, continued:**

Just played a bit with numbers.

Suppose we play with a cap model of the atom, and we’d like to find the unit-length capacitance C, bearing in mind that the exponential’s time constant tau should be 1000 times that of the step.

The key equation is

RC*l = 1000*l/c

R is the discharge resistance

C is the capacitance per unit length

l is the length

c is the speed of light in the medium, let’s say 2×10^8 m/s

eliminating l, we have

RC=1000/c,

suppose R = 1000 Ohm.

We have C = 1/c = 5 nF/m.

Does it sound reasonable?

Suppose we use the parallel plate cap model:

C=eps*(w/a), where w is width and a is distance between the plates.

What should they be?

Suppose the ratio between w and a, w/a is 5000.

This means that our eps must then be 10^(-12) – does that sound realistic?

In vacuo, I think eps0 is about that order, 8.85*10^(-12).
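A one-screen check of the arithmetic above, using the same assumed numbers:

```python
# Same assumed numbers as in the text above.
c = 2e8                 # speed of light in the medium, m/s
R = 1000.0              # discharge resistance, Ohm
C = 1000.0 / (R * c)    # from R*C = 1000/c
w_over_a = 5000.0       # assumed plate width / plate separation ratio
eps = C / w_over_a      # parallel-plate model: C = eps * (w/a)
print(C)                # 5e-09  -> 5 nF/m
print(eps)              # 1e-12  -> compare eps0 = 8.85e-12 in vacuo
```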

**Dave Walton:**

Yes, the numbers do seem reasonable, but would you agree that the real challenge is to understand (or at least model) how the energy current distribution changes when a photon is emitted? This should ultimately lead to an understanding of the probability distribution in the orbitals.

This is tough.

**Alex Yakovlev:**

Dave,

This is puzzling.

Assuming that the atom is an LC loop with distributed parameters and a normally closed door, it emits a photon in the form of a section of an exponential/sine wave. What then happens? My knowledge of atomic physics is rusty and weak. But presumably the aperture and interval of the door opening depend on some external factors, right?

It’s a bit like controlling the switch (externally) for a TL so that the energy current comes out during the discharge process. Can we extract energy from a capacitive or LC-type TL in portions – sections of an exponential or chunks of a sine wave?

I am pretty sure there should be a deterministic model – or at least one based on some sort of histograms of the frequencies of time spent in different states – rather than a purely stochastic probability model.

On a slightly different yet related side:

It seems that with energy current trapped in units like atoms or TLs, we have to deal with two levels of dynamics:

1) the higher-frequency one – concerned with the TEM vacillating inside the atom or TL – basically where the step or period is determined by the eps and mu parameters and the geometry of the atom/TL;

2) the lower-frequency one – concerned with macro-elements, like resistors and switches that are controlled from outside – with parameters like the time constants of charge/discharge exponentials or the periods of sine waves.

Interestingly, (1) is at the level of 100s of THz – close to infrared and higher-frequency light – and people seem to know how to sense it at the level of photonic materials;

(2) is modern analogue electronics – with lumped LC loop antennas – somewhere up to 100 GHz.

What’s in between is some sort of dead zone – a gap (between the materials level and the circuits level) where not much can be done.

This is my own perception, maybe I am wrong. I wonder what you think about it.

Why should we associate the emergence of ohmic losses with the basic ExH energy current propagation? At this level there is no point in talking about ohmic losses, because at this level we have no idea about any charge movement, i.e. no electric current is defined here. It’s all about EM energy current.

Ohmic losses need only be considered at the level of the superposition of Electric and Magnetic fields, i.e. at the level of V and I values in a particular place in space and a particular point in time.

If we take any of the Wakefield experiments, we can find in them an interval of time at a certain place where the overlap of two reciprocating waves is such that they produce no current (i.e. the cumulative effect on the magnetic field is zero), so there are no ohmic losses there and then. This does not, however, preclude the two ExH waves from moving in opposite directions.

That can only get worse as people are increasingly living in virtual reality.

Engineers are commonly regarded as good if they can quickly look up the bag of available models and tools and identify the right one to apply.

Any attempt to be doubtful or suspicious, or even to discard the canon for a completely new way of expression, is penalised by the Pharisees of society, appointed by society to be its so-called “Scientific Advisors”.

Anglo-American culture is particularly characteristic of this – with its love of all kinds of pub quizzes and TV game shows, its praise for quickly recognising soap-opera catch-phrases, for quoting Bob Hope or aphorisms, and what not. Including, of course, the pretence of being intellectual by showing off Latin or French expressions … As long as nothing goes deep into an understanding of English grammar, let alone that of any foreign language (why bother with that if you can simply build your sentences out of the phrasal verbs in which English is so rich), or into even deeper learning and thinking in different languages.

So, whether or not the members of this society want to be unique or original when moving into science or other forms of intellectual activity, they remain apples that don’t fall far from their trees.

(cf. the **Static Fields** rubric on the wiki page about the Poynting vector: https://en.wikipedia.org/wiki/Poynting_vector)

**The corollary of the proposition proven earlier is that there are NO static fields** per se.

Of course, we need to say what we mean by ‘static’ here. Well, static means – not moving! A common online English dictionary defines static (adjective) as follows: lacking in movement, action, or change, especially in an undesirable or uninteresting way.

So I have every right to surmise that static fields do not move at the speed of light, according to this definition. So there is a contradiction with the proof. Therefore, the only way to resolve it is to conclude that static fields DO NOT have the right to exist!

Indeed, what is believed to be static is actually a superposition, or contrapuntal effect, of normally moving fields (Poynting vectors, to be precise), where their stepping or pulsing effects are not visible. A normal illusion due to superposition.

One might ask: but what about, for example, the cylindrical capacitor shown on https://en.wikipedia.org/wiki/Poynting_vector?

The answer is: just the same thing – there are at least two power flows of the ExH form there, like two conveyor belts of sheaths moving against one another, where the H (magnetic) components are superposed and show the cumulative effect H = 0. Just short-circuit this cylinder at at least one edge, and you will see the effect of transition (redistribution) of the magnitudes of E and H such that the total amount of power ExH crossing the spatial cross-section remains the same.
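As a toy numeric picture of the ‘two conveyor belts’ (illustrative values of mine, not measured ones): two counter-moving TEM sheaths with equal E and opposite H superpose into what looks like a static E field with zero net power flow, although each belt keeps carrying power:

```python
# Illustrative values (mine): two counter-moving TEM 'conveyor belts'.
E1, H1 = 5.0, +0.05   # right-moving sheath: power flow S1 = E1*H1 > 0
E2, H2 = 5.0, -0.05   # left-moving sheath:  power flow S2 = E2*H2 < 0
E, H = E1 + E2, H1 + H2
S_net = E1 * H1 + E2 * H2
print(E, H, S_net)    # 10.0 0.0 0.0 -> looks 'static', zero net power flow
```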

So a **Static Field** (*static in the sense of the above definition*) is an **illusion** – just another visit from H G Wells’ Invisible Man!

(see my earlier post: https://blogs.ncl.ac.uk/alexyakovlev/2019/09/14/wakefield-4-experiment-causal-picture-in-energy-current/ and Ivor Catt’s original paper on Wakefield 1: http://www.ivorcatt.co.uk/x343.pdf)

**Alex Yakovlev**

**13 August 2020**

**The main hypothesis is:**

H: EM energy current in the form of ExH (aka the Poynting vector) can only exist in motion at the speed of light.

**Experiment:**

Consider a Wakefield experiment with a Tx Line that is initially discharged.

At time t=0, the TL is connected at point A (the left-hand side) to a 10V source, where it is terminated with an open circuit. Point B is in the middle. Point C is at the right-hand side and is short-circuited.

Wakefield shows that:

At point A we have a square-shaped oscillation between +10V (half-time) and -10V (half-time).

At point C we see no changes – completely discharged line at 0V.

At point B we have the following cyclically repeated sequence of phases: (a) 0V (quarter time), (b) +10 (quarter time), (c) 0V (quarter time), (d) -10V (quarter time).

A similar analysis can be carried out with an initially charged TL which is short-circuited at point A and is open-circuited at point C.
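To make the contrapuntal picture concrete, here is a minimal ‘bounce diagram’ toy in Python (my own illustration, not part of the original experiment). The line is split into N cells carrying right- and left-going wave amplitudes; the short at C reflects with coefficient -1, and at A the wave returned by the otherwise open end is superposed with the source’s 10V step – boundary rules chosen simply so that the toy reproduces the waveforms quoted above:

```python
# Minimal bounce-diagram toy (my illustration) of the Wakefield setup above.
N = 120                              # cells; one-way flight time = N ticks
right, left = [0.0] * N, [0.0] * N   # right- and left-going wave amplitudes
probe = {"A": 0, "B": N // 2, "C": N - 1}
trace = {name: [] for name in probe}

for t in range(4 * N):               # one full period of the pattern
    out_right, out_left = right[-1], left[0]  # amplitudes leaving each end
    right = [0.0] + right[:-1]       # shift right-going wave one cell right
    left = left[1:] + [0.0]          # shift left-going wave one cell left
    left[-1] = -out_right            # short at C inverts and returns
    right[0] = out_left + 10.0       # A: returned wave plus the 10 V source
    for name, i in probe.items():
        trace[name].append(right[i] + left[i])

# Sample each probe eight times per period:
for name in ("A", "B", "C"):
    print(name, [trace[name][k] for k in range(0, 4 * N, N // 2)])
# A -> ±10 V square wave; B -> quarter phases 0, +10, 0, -10; C -> always 0 V
```

The voltage at every cell is nothing but the sum of the two moving waves, so the ‘0V’ phases (a) and (c) at B are internally different states, exactly as claimed below.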

**Experimental fact:**

W: We observe contrapuntal effects in Wakefield, such as at point B, where we have phases (a) and (c) in which the cumulative effect of the ExH field waves makes them look observationally equivalent – at 0V – yet leads to different subsequent behaviour, i.e. from (a) it goes to (b), and from (c) it goes to (d).

**The proposition:**

P: The contrapuntal effects that we observe in Wakefield hold if and only if ExH can only exist in motion at the speed of light.

In other words, we state that W is true if and only if H holds, i.e. H is a necessary and sufficient condition for W.

**Proof:**

Sufficiency (H->W):

Suppose H is true. We can then easily deduce that at every point in space A, B and C, the observed waveform will be as demonstrated by Wakefield.

(Ivor’s website contains my prediction for Wakefield 3 with contrapuntal behaviour – the analysis was based on Ivor’s theory – i.e. hypothesis H, and it was correctly confirmed by the experiment. For details see: http://www.ivorcatt.co.uk/x91cw34.htm and http://www.ivorcatt.co.uk/x842short.pdf)

Necessity (W->H, which is equivalent to not H -> not W):

Suppose H does not hold, i.e. at some point in space and/or time, ExH is stationary or does not travel at the speed of light. Let us first look, say, at point C. We see a “discharged state” – it corresponds to what we may call a stationary-state electric field, i.e. E=0 – a discharged piece of TL. Here we can possibly say that the voltage across it is constantly equal to 0 because at C it is short-circuited.

Next, we look at point B at a time when the voltage level is equal to 0V, say in phase (c). Using the same argument as we did for point C, we think it is a static E=0. One might argue that point B is not short-circuited, but this does not matter from the point of view of our observation – it’s just 0V.

How can we predict that after a specific and well-defined time interval the voltage at B will go down to -10V, and not up to +10V as it would have done had we been in phase (a)? In other words, how can we distinguish the states in those two phases using classical theory, where phase (a) is observationally equivalent to phase (c)?

The only way we could predict the real behaviour in W with classical theory is if we had some ADDITIONAL memory that would store, in another object, the information that although we were stationary here in that place and time interval, we were actually in transit between phases (b) and (d) rather than between (d) and (b).

The fact that we need ADDITIONAL memory (another TL) is something outside the scope of our original model, because we did not have it organised in the first place. So there is no knowledge in the original model that will make us certain that from phase (c) we will eventually and deterministically go to phase (d).

**Q.E.D.**

**Note:** The above fact of having phases (a), (b), (c) and (d) is the result of the contrapuntal effect of the superposition of the partial actions performed by the steps moving in the right and left directions. Unless that motion was always (in time and in space) at a well-defined speed (the speed of light), we would not be able to predict that from phase (c) we will definitely and only transition to phase (d), and not to phase (b), nor how quickly that transition will happen. The case of a fully charged or fully discharged capacitor, with its seemingly stationary E field that is in fact a contrapuntal effect of the superposed motion of ExH in all directions, is just a special case of the TL.

**Remark from David Walton:** “The only way we could predict the real behaviour in W with classical theory is if we had some ADDITIONAL memory that would store, in another object, the information that although we were stationary here in that place and time interval, we were actually in transit between phases (b) and (d) rather than between (d) and (b)” is the key point.

Another way to state the same thing, in a different context and less formally (I think), is to point out that when two pulses travelling in opposite directions pass through each other, either the B or the E fields will cancel, demonstrating that the field cannot be the cause of the onward propagation of the EM pulse.

**My response:**

That’s a great point you make. Indeed, the absence of either B or E in the contrapuntal state deprives us of the ability to talk about further propagation of the pulses.

Yes, the key point is the absence of memory about the dynamical process in the classical field model.

In summary:

**Illusions … How many we have every day, because we don’t really know they are happening around us (not enough sensors or memory to track things). The contrapuntal effects are those that H G Wells probably had in mind in the shape of the Invisible Man. They blind us to reality …**

Ego is a form of trapped knowledge (maybe it also leads to gravity).

The law of energy conservation, as it is presented to students and commonly understood, is rather abstract, as it begs for many interpretations – because energy exists in permanent and omnipresent motion. Even if it is trapped in a fragment of space, like a capacitor or an elementary particle, it is in motion.

So, what seems less convoluted is the law that energy can only exist in motion, and that it can only move at the speed of light. That is actually what conservation of energy is. This is true by the principle of Occam’s razor and does not need to be proven. So it is necessarily so before and after the switch [between the voltage source and a capacitor] is closed … and without this law we would not have had those perfect contrapuntal effects, including those that ’cause’ people to think we have stationary conditions in capacitors and transmission lines.

He was a pioneer of research in metastability, arbitration and synchronization, as well as VLSI design, and led the Microelectronic Systems Design group at Newcastle for 20 years.

We generated many ideas for projects, PhD research, papers, design tools, conference and industrial presentations. Above all, we just enjoyed spending time in discussions about science, culture and genealogy. David and his wife Anne welcomed on many occasions the whole Newcastle MSD team in their wonderful Sike View house in Kirkwhelpington in the middle of Northumberland.

By a lovely coincidence, there couldn’t have been a better occasion: yesterday I was kindly invited to give the lecture “Becoming a Researcher: from Follower to Leader” to a wonderful 100+ audience of Iraqi researchers – the invitation came from my PhD alumnus Dr Ammar J M Karkar, Professor and Director of IT Research and Development at the University of Kufa, Iraq.

The lecture is now available on YouTube https://youtu.be/JnfObxmTslc

All the best!

Often people talk about a mix of both views, and that’s where many paradoxes and contradictions arise. For example, there is an interesting ‘puzzle’ posed to the world by Ivor Catt. It is sometimes called Catt’s Question or the Catt Anomaly.

http://www.electromagnetism.demon.co.uk/cattq.htm

Basically, the question is: when a voltage step is transmitted along a transmission line from a source to the end, according to classical EM theory charge appears on both wires (+ on the leading wire, and – on the grounded wire): **Where does this new charge come from?**

Surprisingly, there has not been a convincing answer from anyone that would not violate one or another aspect of the classical EM theory.

Similar to this, there is a challenge posed by Malcolm Davidson, called the Heaviside Challenge (https://www.oliver-heaviside.com/), which also hasn’t been given a consistent response, even though the challenge comes with a 5,000 USD prize!

So it seems that there is a fundamental problem in reconciling the two worlds, in a consistent theory based on physical principles and laws, rather than mathematical abstractions.

However, there is hope that the way to understand and explain EM phenomena, especially in high-speed electronic circuits, is through the notion of a Heaviside signal and the principle of energy current (the Poynting vector) that never ceases travelling at the speed of light in the medium. In terms of energy current, perfect dielectrics are perfect conductors of energy, whereas perfect charge conductors are perfect insulators for EM energy current.

So, while those who prefer the charge-based view of the world may continue to talk about static and dynamic charges, those who see the world via energy current live in a world where there is no such thing as a static electric or magnetic field, because a TEM signal can only exist in motion at the speed of light in the medium. The medium is characterised by its permittivity and permeability, which give rise to two principal parameters – the speed of light and the characteristic impedance. The inherent necessity of the TEM signal to move is stipulated by Galileo’s and Newton’s principles of geometric proportionality, which effectively define the relations between any change of the field parameter in time and its change in space. Those two changes are linked fundamentally; hence we have the coefficient of proportionality delta_x/delta_t, also known as the speed of light, which gives rise to causality between the propagation of energy or information and the momenta of force acting on objects with mass.

Another consequence of the ever-moving energy current is its ability to be trapped in a segment of space, pretty much as in a so-called capacitor, thus forming an energised fragment of space that gives rise to an object with mass, e.g. a charged particle such as an electron. So this corollary of the first principle of energy current paves the way to the view of EM that is based on charged particles.


The *four main sects* of (then religious) thinkers were:

**Sadducees** – conformists to the Greco-Roman rulers

**Pharisees** – purists and devotees to the established canon

**Essenes** – ‘holy’ ones waiting for Messiah

**Zealots** – radical and militant ones

There were also Scribes, but they were a sort of what Ivor Catt calls Parrots, and they weren’t influential – they were often closer either to the Pharisees or the Sadducees.

An interesting self-test is to think which one (or none, or several) of them each of us belongs to.

A long time ago (approx. 40 years), when my father took over as head of the control engineering department at the St. Petersburg Electrical Engineering Institute (LETI) from its previous head, Professor Alexander Vavilov, their school was excited by exploring the idea of evolutionary synthesis of control systems. One crucial part of this study was the development of a theory of structural synthesis, where models of the system at each level of granularity had to be adequate to the criteria of optimal control. (By the way, graphs were essential in those models.)

The basic idea was that depending on the level of granularity (or hierarchy) considered by the modeller, the system can have completely different criteria of correctness and/or optimality; hence certain aspects that are significant at a small scale may not be important at a larger scale.

A bit like how the criteria of control at the national level are not the same as the criteria for control at the municipal level, nor the same at the level of the local community, nor at the level of family units and individual households.

So, because of these differences and clashes of interest between different levels there is a lot of anxiety and misunderstanding in societies.

So, what is the relationship with COVID-19?

Well the relationship is direct.

Let’s take the data on Mortality 2017 from the UK Office for National Statistics: https://www.ons.gov.uk/visualisations/dvc509/chart1/index.html

This data shows that the number of deaths across the country in one year is significant – hundreds of thousands, not far from 1M. The relative number of deaths that we are witnessing now as a result of COVID-19, even if it hits 10K-20K, would be quite small, though.

So we clearly have different perspectives here: one is national (spatial) and stretches across the whole year (temporal), while the other could be local (e.g. an area of population in London) and taken during these 2-3 weeks of March-April. The relative increase in the number of deaths at the national scale is a small bump on the curve. I.e., integrating the number of deaths caused by respiratory problems due to COVID-19 at the national scale will not contribute much to the game of totals.

However, if we look spatiotemporally at the small scale, we may see a significant rise in terms of the differential and even the proportional response. So, if we are particularly sensitive to these two aspects, differential and proportional, we may actually decide to react with powerful action.

What we are facing here is exactly what I started this post with. We are facing different levels of granularity (or hierarchy, whatever we call it). Consider the coarse granularity. From this point of view, our Mother Nature in us may say: well, why bother – the integral response (let’s denote it by the letter I) is very small, and we look at time intervals of decades, so there is no need for any great change in decision-making. The problems of an environmental nature are much more serious.

But let’s go down to the level of individuals, especially those living in the areas most affected by COVID-19. Again, our Mother Nature in us would tell us that the rise in deaths due to coronavirus is an alarm: it may trigger a disaster; we may lose loved ones, lose a job and income. What’s happening here? At this lower granularity level, the criteria for decision-making are based on differential and proportional responses (let’s denote them D and P). So, in mathematical terms, at different levels of granularity we apply different coefficients, or what engineers call gains, to these aspects P, I and D, and form our decisions according to those gains or criteria of importance.
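As a toy sketch of that P/I/D reading of the same data (all numbers invented for illustration): a short mortality series evaluated with ‘national’ gains that favour the slow integral, and with ‘local’ gains that favour the proportional and differential terms, gives opposite impressions of the same curve:

```python
# Toy numbers, invented for illustration only.
series = [100, 102, 101, 103, 140, 190, 260]  # hypothetical weekly deaths
baseline = 100.0
P = series[-1] - baseline                # proportional: current excess
I = sum(v - baseline for v in series)    # integral: accumulated excess
D = series[-1] - series[-2]              # differential: latest rise

national = 0.01 * I                      # coarse view: only the slow integral
local = 1.0 * P + 5.0 * D                # fine view: level and rate of rise
print(f"P={P}, I={I}, D={D}")
print(f"national response = {national:.1f}, local response = {local:.1f}")
```

The same data, passed through different gains, yields a ‘why bother’ signal at one level and an alarm at the other – which is the whole point of the paragraph above.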

So, ultimately, it is vital that the data we use, and the models which characterise this data in time and space – where we calculate partial or full derivatives, and integrate or proportionalise in space and time – are adequate to the criteria of significance we apply, and lead to the corresponding decision-making at the appropriate level.

No doubt the nations that are harmoniously hierarchical and fractally uniform may have fewer problems in matching criteria of optimality with the P, I and D responses brought by the models from the actual data.

Yet again we find that PID control seems to rule the world we live in!
