Alex's EnyMoCo

Resources drive the world …


Discourse on the helpfulness of Natural Philosophy for the Heaviside signal continues

Natural Philosophy does not provide Causality to the Cause-effect relation between electromagnetic events

Further to my previous post on causality forms, here are comments from Ed Dellian (my points are italicised).

Basically, these comments lead me to the conclusion that Natural Philosophy cannot explain cause-effect relations between events taking place in electromagnetics!

Several kinds of causality?

In the last few days, I have been discussing with Ed Dellian the notion of causality, in relation to electromagnetics.

Here are some interesting issues from this discussion.

An important question is what we call “causality”, the “cause-effect” relation. Can we call causality a relation between events happening without the involvement of matter (or mass), or only a relation between events involving material objects? The latter seems to follow from Newtonian physics and so-called “geometric proportionality”.

So, let me define two forms of what seems to be a causal relationship. Here ExH is the Poynting vector (the cross-product of two vectors – the electric field E and the magnetic field H).
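To make the ExH notation concrete, here is a minimal numerical sketch (the field values are made up purely for illustration) computing the Poynting vector as a cross product:

import numpy as np

# Hypothetical field values at one point of the transmission line (V/m and A/m).
E = np.array([4.0, 0.0, 0.0])    # electric field along x
H = np.array([0.0, 0.011, 0.0])  # magnetic field along y

# Poynting vector S = E x H: direction and density of the energy flow (W/m^2).
S = np.cross(E, H)
print(S)  # points along z, i.e. down the line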

(Form 1)

– an event on ExH (say, step from 0V to 4V) taking place at point A of the transmission line – Cause;

– an event on ExH (step from 0V to 4V) taking place 200 picoseconds later at point B of the transmission line – Effect

(Form 2)

– an event on ExH (say, step from 0V to 4V) taking place at some point X of the transmission line – Cause;

– a move (change in motion) of a particle with finite mass next to point X – Effect.

I think this is an important question. It concerns two forms of transfer of energy:

(1) at the level of energy current, between a change in energy current (force) and another change in energy current (force) – this does not involve matter

(2) at the level between a change in energy current (a force outside moving matter) and motion (a change of motion).

So far, the view from natural philosophers like Ed Dellian is dismissive of Form 1, and only partially aligns with Form 2.

This is what he wrote to my question about these forms:

The requirement of causality is to distinguish between cause (A) and effect (B) being quantities of physical entities (A, B) differing in kind (lat. genus) like apples and pears. Whether physical entities differ in kind can be found by analyzing their dimensions. Cause A (dimension A) and effect B (dimension B) are entities with different dimensions (different entities). Consequently a mathematical law of causality (generation of effect B by a generating cause A) cannot read B = A. The only reasonable mathematical relation between such different quantities (if there is any) is a geometric proportionality according to A/B = C = constant. The dimensions of the constant C accordingly will be given [A/B].

So your “form 1” where you deal with two “events” of a same kind has nothing to do with causality.

What about “form 2”? There is at point X what you call an “event on ExH”, and there is, as you say, “a move of a particle next to point X”. Now, should the “move” of the particle be lawfully related to the “event”, for example, to the event being symbolized by E, and should p be proportional to E according to E/p = c = constant, this would describe a causal relation between E and p.

But how to apply this example to the problem of “energy current” in a transmission line? As I see things, the observable – the “effect” – is not a “move” of something material from A to B at the transmission line. By analogy I would say, the effect at X is a transfer of “momentum” p from one particle, or pendulum bob, to the other, or, as in a billiard game, from one ball to the other, caused by a “force” impressed on the particle, which “force” some call “energy”. Consequently, there is no moving “energy current”; rather the cause  “energy” must already be “there” at every point of the transmission line, so that it can locally generate the effect of “transfer of momentum from particle to particle” according to the law E/p = c = constant as soon as the switch is turned to light the lamp. So I would say, the impression of “energy current” is only due to (1) confusing the effect with the cause, and (2) confusing the scalar “velocity c” of generation of an effect in space and time with a vector velocity v of “current”, that is, material transport from A to B. I think the billiard ball example is striking: You can observe the velocity v of the rolling ball A, and you can observe that the momentum of A is “immediately” transferred to the ball B in the collision. The generation of the momentum of B takes place at the “velocity of generation” c, which has nothing to do with the velocity v of the rolling balls. Analogously may happen the generation of momentum p as an effect of cause E at every point of a transmission line, which, as the generated momentum p “propagates” through the line (propagating in one direction since the cause E is a vector!),  only apparently indicates a “current” (move from A to B) at “velocity c”. 

 

To which I replied (quoting him first):

The requirement of causality is to distinguish between cause (A) and effect (B) being quantities of physical entities (A, B) differing in kind (lat. genus) like apples and pears. Whether physical entities differ in kind can be found by analyzing their dimensions. Cause A (dimension A) and effect B (dimension B) are entities with different dimensions (different entities). Consequently a mathematical law of causality (generation of effect B by a generating cause A) cannot read B = A. The only reasonable mathematical relation between such different quantities (if there is any) is a geometric proportionality according to A/B = C = constant. The dimensions of the constant C accordingly will be given [A/B].

What about causality of the same kind (species) – parent to child?

So your “form 1” where you deal with two “events” of a same kind has nothing to do with causality.

So, what is this? Clearly, event B, being further from the source of the step, cannot happen before event A. In fact it can only happen after event A, and moreover this “after” occurs L/c time units later, where L is the distance between points A and B along the transmission line and c is the propagation speed.

And we can’t deny this effect because this is what we see in the experiments.

I can interpret this as geometric proportionality with coefficient k, which is dimensionless in your terms.

But, incidentally, who said that geometric proportionality should be defined by the algebraic division operator?

The physical world can suggest other forms of proportionality to us – for example, could we not define proportionality in the form of a time-shift operator?
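As a rough illustration of this idea (a toy sketch only; the 4 V amplitude and 200 ps delay simply echo the Form 1 example above), the cause-effect relation of Form 1 can be written as a time-shift operator acting on the waveform observed at point A:

# Form 1 causality as a time-shift operator: the event at B is the event at A
# delayed by L/c (here taken as 200 ps, as in the example above).

def step_at_A(t, t0=0.0, amplitude=4.0):
    # ExH step at point A: 0 V before t0, 4 V after.
    return amplitude if t >= t0 else 0.0

def observed_at_B(t, delay=200e-12):
    # Time-shift operator: the waveform at B is the waveform at A delayed by L/c.
    return step_at_A(t - delay)

for t in (0.0, 100e-12, 200e-12, 300e-12):
    print(t, step_at_A(t), observed_at_B(t))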

Please note that I am not dismissing your definition of causality as being limited. I am just looking for a form of expressing the event-precedence effect in a transmission line, which is what we see in our experiments. Ivor’s theory underpins it with the notion of the “Heaviside signal” (aka “energy current”).

The search for truth on causality continues ….

 

My talk at the 2nd Workshop on Reaction Systems

Following the 1st School on Reaction Systems in Torun, Poland, there was the 2nd Workshop on Reaction Systems, also held in Torun.

The workshop programme is listed here:

http://wors2019.mat.umk.pl//workshop/

I gave a talk on “Bringing Asynchrony to Reaction Systems”. This talk presented work in (pre-)progress, mostly developed during the Reaction Systems week in Torun.

The abstract of my talk is below:

Reaction systems have a number of underlying principles that govern their operation. They are: (i) maximum concurrency, (ii) complete renewal of state (no permanency), (iii) both promotion and inhibition, (iv) 0/1 (binary) resource availability, (v) no contention between resources. Most of these principles could be seen as constraints in a traditional asynchronous behaviour setting. However, under a certain viewpoint these principles do not contradict the principles underpinning asynchronous circuits, if the latter are considered at an appropriate level of abstraction. Asynchrony typically allows enabled actions to execute in either order, retains the state of enabled actions while other actions are executed, involves fine-grained causality between elementary events, and permits arbitration for shared resources. This talk will discuss some of these potential controversies and attempt to show ways of resolving them, thereby bringing asynchrony into the realm of reaction systems. Besides that, we will also look at how the paradigm of reaction systems can be exploited in designing concurrent electronic systems.
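As a minimal illustration of the principles listed above (my own toy encoding, not the formalism used in the talk), a reaction system step can be sketched as follows: every enabled reaction fires (maximum concurrency) and the next state consists only of the products (no permanency), with inhibition and 0/1 resource availability built in:

# Toy reaction system: a reaction is a triple (reactants, inhibitors, products).
reactions = [
    ({"a", "b"}, {"c"}, {"d"}),   # needs a and b, blocked by c, produces d
    ({"d"}, set(), {"a"}),        # needs d, produces a
]

def step(state, reactions):
    # All enabled reactions fire together (maximum concurrency); the next state
    # is the union of their products only (no permanency, 0/1 availability).
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result

s = {"a", "b"}
for _ in range(4):
    print(s)
    s = step(s, reactions)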

The slides of my talk are here

 

My lecture on Asynchronous Computation at the 1st School on Reaction Systems

The 1st School on Reaction Systems has taken place in historic Toruń, Poland.

It was organised by Dr Lukasz Mikulski and Prof Grzegorz Rozenberg at the Nicolaus Copernicus University.

I managed to attend a number of lectures and gave my own lecture on Asynchronous Computation (from the perspective of an electronic designer).

Here are the slides:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Torun-Yakovlev-lecture-final.pdf

Ideas picked up at the 1st School on Reaction Systems in Torun, Poland

Grzegorz Rozenberg’s lecture on Modularity and looking inside the reaction system states.

  • Some subsets of reactants will be physical – they form modules.
  • Stability implies a lattice: a state transition is locally stable if the subsets (modules) in the states are isomorphic. These subset structures form a partial order, so we have an isomorphism between partial orders. So, structurally, nothing really changes during those transitions – nothing new!
  • Biologists call this “adulthood”. It would be nice to have completion detection for that class of equivalence!

Paolo Milazzo’s talk (via Skype) on Genetic Regulatory Networks.

  • Some methods exist in gene regulation for saving energy – say by using lactose (as some sort of inhibitor)
  • He talked about sync/async Boolean networks of regulatory gene networks.

Paolo Bottoni on Networks of Reaction Systems.

  • Basic model – the environment influences the reaction system
  • Here we consider the reaction system influencing the environment

Robert Brijder on Chemical Reaction Networks.

Hans-Joerg Kreowski on Reaction Systems on Graphs.

  • Interesting graph transformations as reaction systems.
  • Examples involved some graph growth (e.g. fractals such as Sierpinski graphs)

Grzegorz Rozenberg on Zoom Structures.

  • An interesting way of formalizing the process of knowledge management and acquisition.
  • Could be used by people from, say, drug discovery and other data analytics

Alberto Leporati on Membrane Computing and P-systems.

  • The result of an action in a membrane is produced to the outside world only when the computation halts.
  • Question: what if the system is so distributed that we have no ability to guarantee the whole system halts? Can we have partial halts?
  • Catalysts can limit parallelism – sounds a bit like some sort of energy or power tokens

Maciej Koutny on Petri nets and Reaction Systems

  • We need not only to prevent consumption (using read arcs) but also to prevent (inhibit!) production – something like “joint OR causality” or an opportunistic merge can help (a toy sketch of this idea follows below).
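A rough sketch of what I take this to mean (my own toy encoding, not Maciej’s construction): a transition that tests places without consuming from them (read arcs) and whose production into a place can itself be inhibited:

# Toy transition with read arcs and "inhibited production": reading does not
# consume tokens, and a product is created only if its production-inhibitor
# place is empty, so production (not just consumption) can be blocked.

def fire(marking, consume, read, produce, prod_inhibit):
    if not (consume <= marking and read <= marking):
        return None  # transition not enabled
    new = (marking - consume) | {p for p in produce
                                 if prod_inhibit.get(p) not in marking}
    return new

m = {"p1", "p2"}
# p3 would be produced, but its production is inhibited while p2 is marked:
print(fire(m, consume={"p1"}, read={"p2"}, produce={"p3"}, prod_inhibit={"p3": "p2"}))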

 

New book on Carl Adam Petri and my chapter “Living Lattices” in it

A very nice new book “Carl Adam Petri: Ideas, Personality, Impact“, edited by Wolfgang Reisig and Grzegorz Rozenberg, has just been published by Springer:

https://link.springer.com/book/10.1007/978-3-319-96154-5

Newcastle professors Brian Randell, Maciej Koutny and I contributed articles to it.

An important aspect of those and other authors’ articles is that they mostly talk about WHY certain models and methods related to Petri nets have been investigated rather than describing the formalisms themselves. Basically, some 30-40 years of history are laid out on 4-5 pages of text and pictures.

My chapter “Living Lattices” provides a personal view of how Petri’s research inspired my own research, including comments on related topics such as lattices, Muller diagrams, complexity, concurrency, and persistence.

The chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_28

There is also an interesting chapter by Jordi Cortadella “From Nets to Circuits and from Circuits to Nets”, which reviews the impact of Petri nets in one of the domains in which they have played a predominant role: asynchronous circuits. Jordi also discusses challenges and topics of interest for the future. This chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_27

 

Superposing two levels of computing – via meta-materials!?

Computing is layered.

We have seen it in many guises.

(1) Compiling (i.e. executing the program synthesis) and executing a program

(2) Configuring the FPGA code and executing FPGA code

….

Some new avenues of multi-layered computing are coming with meta-materials.

On one level, we can have computing with potentially non-volatile states – for example, we can program materials by changing their most fundamental parameters, like permittivity (epsilon) and permeability (mu). This is configurational computing, which itself has certain dynamics. People who study materials, and even devices, very rarely think about the dynamics of such state changes. They typically characterize them in a static way – I-V curves, hysteresis curves, etc. What we need is more time-domain characterization, such as waveforms, state graphs …

More standard computing is based on the stationary states of parameters. Whether analog or digital, this computing is often characterized in dynamic forms, and we can see timing and state diagrams, transients …

When these two forms of computing are combined, i.e. when the parameter changes add further degrees of freedom, we obtain two-level computing. This sort of layered computing is increasingly what we need when we talk about machine learning and autonomous computing.
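A crude sketch of this combination (entirely illustrative; the class, parameter names and values are invented here): a slow “configurational” layer programs a material parameter such as permittivity, while a fast layer computes with signals whose response depends on that parameter:

# Two-level computing sketch: a slow configurational state (a material parameter)
# modulates a fast signal-level computation.

class Device:
    def __init__(self, epsilon=1.0):
        self.epsilon = epsilon           # slow, quasi non-volatile configuration

    def reconfigure(self, new_epsilon):  # level 1: programming the material
        self.epsilon = new_epsilon

    def respond(self, u):                # level 2: fast signal-level computation,
        return u / self.epsilon          # whose result depends on the configuration

d = Device()
print(d.respond(1.0))   # behaviour under the initial configuration
d.reconfigure(4.0)      # change the material parameter (slow dynamics ignored here)
print(d.respond(1.0))   # the same input now computes a different result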

Meta-materials are a way to achieve that!

Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a ‘mothball computer’ – a capsule whose outer surface harvests power from the environment, while inside the capsule we have the computational electronics.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays, or ultimately by gamma rays!

The power supply for the modulation electronics can be generated by solar cells – perovskite cells. Because perovskite cells contain lead, they can also prevent gamma rays from propagating outside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a ‘continuum’ range of temporal computing, operating in the range between a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse analog averaged signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443)
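A toy sketch of this continuum (my own illustration, not taken from the book): a value in [0,1] represented as the time-averaged density of pulses in a burst, generated either deterministically or stochastically:

import random

def deterministic_burst(x, n=100):
    # Encode x in [0,1] as n slots with pulses spread evenly (deterministic rate code).
    return [1 if int((i + 1) * x) > int(i * x) else 0 for i in range(n)]

def stochastic_burst(x, n=100):
    # Encode x as a Bernoulli pulse stream with probability x per slot.
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(burst):
    # Decode by time-averaging the pulse density over the burst.
    return sum(burst) / len(burst)

print(decode(deterministic_burst(0.3)))  # exactly 0.3
print(decode(stochastic_burst(0.3)))     # approximately 0.3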

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

Switched electrical circuits as computing systems

We can define computations as the working processes of electrical circuits that are associated with sequences of (meaningful) events. Let’s take these events to be discrete, i.e. something that can be enumerated with integer indices.

We can then map sequences of events onto integer numbers, or indices. Events can be associated with the system reaching certain states, or, in a more distributed view, with individual variables of the system reaching certain states or levels. Another view is that of a component in the system’s model moving from one state to another.

To mark such events and enable them we need sensory or actuating properties in the system. Why not simply consider an element called a “switch”:

Switch = {ON if CTRL = ACTIVE, OFF if CTRL = PASSIVE}

What we want to achieve is to be able to express the evolution of physical variables as functions of event indices.

Examples of such computing processes are:

  • Discharging capacitance
  • Charging a (capacitive) transmission line
  • Switched cap converter
  • VCO based on inverter ring, modelled by switched parasitic caps.

The goal of modelling is to find a way of solving for the behaviour of computational electrical circuits using a “switching calculus” (similar to Heaviside’s “operational calculus”, used to solve differential equations in an efficient way).
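As a small illustration of expressing “the evolution of physical variables as functions of event indices” (a sketch under idealised assumptions: lossless charge sharing, no continuous-time detail), here is a switched-capacitor example where the voltage depends only on the integer index of switching events:

# Event-indexed model of a switched-capacitor circuit: at each switching event n
# the main capacitor C1 shares its charge with a small, previously emptied C2,
# so the voltage is a function of the event index rather than of continuous time:
#   V[n+1] = V[n] * C1 / (C1 + C2)

C1, C2 = 10e-12, 1e-12     # farads (illustrative values)
V = 4.0                    # initial voltage on C1

for n in range(10):
    print(n, V)
    V = V * C1 / (C1 + C2)  # switching event n: charge sharing, then C2 is emptied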

Some of Leonid Rosenblum’s works

L. Ya. Rosenblum and A.V. Yakovlev.
Signal graphs: from self-timed to timed ones,
Proc. of the Int. Workshop on Timed Petri Nets,
Torino, Italy, July 1985, IEEE Computer Society Press, NY, 1985, pp. 199-207.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-TPN85.pdf

A paper establishing an interesting relationship between the interleaving and true-causality semantics using algebraic lattices. It also identifies a connection between classes of lattices and the property of generalisability of concurrency relations (from arity N to arity N+1), i.e. the conditions for answering questions such as: if three actions A, B and C are all pairwise concurrent, i.e. ||(A,B), ||(A,C), and ||(B,C), are they concurrent “in three”, i.e. ||(A,B,C)? (A small counterexample sketch follows the link below.)

L. Rosenblum, A. Yakovlev, and V. Yakovlev.
A look at concurrency semantics through “lattice glasses”.
In Bulletin of the EATCS (European Association for Theoretical Computer Science), volume 37, pages 175-180, 1989.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/lattices-Bul-EATCS-37-Feb-1989.pdf
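To see why generalisability from arity 2 to arity 3 is not automatic, here is a standard kind of counterexample (my illustration, not taken from the paper): if three actions compete for a shared resource holding only two tokens, they are pairwise concurrent but cannot all occur in one step:

# A shared place holds 2 tokens; each of A, B, C needs one token to fire.
tokens = 2
need = {"A": 1, "B": 1, "C": 1}

def step_enabled(actions):
    # A set of actions can occur as one step if the shared tokens suffice.
    return sum(need[a] for a in actions) <= tokens

print(step_enabled({"A", "B"}))       # True  -> ||(A,B)
print(step_enabled({"A", "C"}))       # True  -> ||(A,C)
print(step_enabled({"B", "C"}))       # True  -> ||(B,C)
print(step_enabled({"A", "B", "C"}))  # False -> not ||(A,B,C)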

A paper about so-called symbolic STGs, in which signals can have multiple values (often convenient for specifying control at a more abstract level than with binary signals); to implement them in logic gates, one needs to solve the problem of binary expansion, or encoding, as well as resolve all the state coding issues on the way to synthesising a circuit implementation.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/async-des-methods-Manchester-1993-SymbSTG-yakovlev.pdf

A paper about analysing concurrency semantics using a relation-based approach. Similar techniques are now being developed in the domain of business process modelling and workflow analysis: L.Ya. Rosenblum and A.V. Yakovlev. Analysing semantics of concurrent hardware specifications. Proc. Int. Conf. on Parallel Processing (ICPP89), Penn State University Press, University Park, PA, July 1989, Vol. 3, pp. 211-218.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-ICPP89.pdf

V.B. Marakhovsky, L.Ya. Rosenblum, and A.V. Yakovlev. Modelling of Concurrent Processes. Petri Nets (in Russian): a course for system architects, programmers, system analysts, and designers of complex control systems. Professional Literature, Saint Petersburg, 2014, 398 pp., ill., tables. (Series “Selected Computer Science”.) ISBN 978-5-9905552-0-4.

https://www.researchgate.net/…/Simulation-of-Concurrent-Processes-Petri-Nets.pdf