My lecture on Asynchronous Computation at the 1st School on Reaction Systems

The 1st School on Reaction Systems took place in the historic city of Toruń, Poland.

It was organised by Dr Lukasz Mikulski and Prof Grzegorz Rozenberg at the Nicolaus Copernicus University.

I managed to attend a number of lectures and gave my own lecture on Asynchronous Computation (from the perspective of an electronic designer).

Here are the slides:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Torun-Yakovlev-lecture-final.pdf

Ideas picked up at the 1st School on Reaction Systems in Torun, Poland

Grzegorz Rozenberg’s lecture on Modularity and looking inside the reaction system states.

  • Some subsets of reactants will be physical – they form modules.
  • Stability implies a lattice: a state transition is locally stable if the subsets (modules) in the states are isomorphic. These subset structures form a partial order, so we have an isomorphism between partial orders. So, structurally, nothing really changes during such transitions – nothing new!
  • Biologists call this “adulthood”. It would be nice to have completion detection for that class of equivalence!

Paolo Milazzo’s talk (via Skype) on Genetic Regulatory Networks.

  • Some methods exist in gene regulation for saving energy – say by using lactose (as some sort of inhibitor)
  • He talked about sync/async Boolean networks of regulatory gene networks.

Paolo Bottoni on Networks of Reaction Systems.

  • Basic model – the environment influences the reaction systems
  • Here we consider the converse: the reaction systems influence the environment

Robert Brijder on Chemical Reaction Networks.

Hans-Joerg Kreowski on Reaction Systems on Graphs.

  • Interesting graph transformations as reaction systems.
  • Examples involved graph growth (e.g. fractals such as Sierpiński graphs)

Grzegorz Rozenberg on Zoom Structures.

  • Interesting way of formalizing the process of knowledge management and acquisition.
  • Could be used by people working in, say, drug discovery and other data-analytics areas

Alberto Leporati on Membrane Computing and P-systems.

  • The result of an action in a membrane is produced to the outside world only when the computation halts.
  • Question: what if the system is so distributed that we have no ability to guarantee the whole system halts? Can we have partial halts?
  • Catalysts can limit parallelism – sounds a bit like some sort of energy or power tokens

Maciej Koutny on Petri nets and Reaction Systems

  • We need not only to prevent consumption (using read arcs) but also to prevent (inhibit!) production – something like “joint OR causality” or an opportunistic merge can help.

 

New book on Carl Adam Petri and my chapter “Living Lattices” in it

A very nice new book “Carl Adam Petri: Ideas, Personality, Impact“, edited by Wolfgang Reisig and Grzegorz Rozenberg, has just been published by Springer:

https://link.springer.com/book/10.1007/978-3-319-96154-5

Newcastle professors Brian Randell, Maciej Koutny and myself contributed articles to it.

An important aspect of these and other authors’ articles is that they mostly talk about WHY certain models and methods related to Petri nets have been investigated, rather than describing the formalisms themselves. Basically, some 30-40 years of history are laid out over 4-5 pages of text and pictures.

My chapter “Living Lattices” provides a personal view of how Petri’s research inspired my own, including comments on related topics such as lattices, Muller diagrams, complexity, concurrency, and persistence.

The chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_28

There is also an interesting chapter by Jordi Cortadella “From Nets to Circuits and from Circuits to Nets”, which reviews the impact of Petri nets in one of the domains in which they have played a predominant role: asynchronous circuits. Jordi also discusses challenges and topics of interest for the future. This chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_27

 

Superposing two levels of computing – via meta-materials!?

Computing is layered.

We have seen it in many guises.

(1) Compiling (i.e. executing the program synthesis) and executing a program

(2) Configuring the FPGA code and executing FPGA code

….

Some new avenues of multi-layered computing are coming with meta-materials.

On one level, we can have computing with potentially non-volatile states – for example, we can program materials by changing their most fundamental parameters, such as epsilon (permittivity) and mu (permeability). This is configurational computing, which itself has certain dynamics. People who study materials, and even devices, very rarely think about the dynamics of such state changes. They typically characterize them in a static way – e.g. I-V curves, hysteresis curves, etc. What we need is more time-domain characterization, such as waveforms, state graphs …

More conventional computing is based on the stationary states of these parameters. Whether analog or digital, such computing is often characterized in dynamic form, and we can see timing and state diagrams, transients …

When these two forms of computing are combined, i.e. when the parameter changes add further degrees of freedom, we have two-level computing. This sort of layered computing is increasingly what we need when we talk about machine learning and autonomous computing.

Meta-materials are a way to achieve that!

Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a ‘mothball computer’ – a capsule whose outer surface harvests power from the environment, with the computational electronics inside.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays or ultimately gamma rays!

Power supply for the modulation electronics can be generated by solar cells – perovskite cells. Because perovskite cells contain lead, they can also shield the outside world from the gamma rays inside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a ‘continuum’ range of temporal compute, operating in the range from a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse analog averaged signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443).
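To make the deterministic/stochastic averaging contrast concrete, here is a minimal Python sketch (my own illustration, not taken from the Mars & Poppelbaum book): a value in [0, 1] is encoded as a burst of pulses whose time-averaged density recovers the value, either by deterministically spaced pulses or by Bernoulli trials.

```python
import random

def to_burst(x, n, stochastic=False, seed=0):
    """Encode a value x in [0, 1] as a burst of n pulses (0/1)."""
    if stochastic:
        # stochastic averaging: each pulse slot is a Bernoulli(x) trial
        rng = random.Random(seed)
        return [1 if rng.random() < x else 0 for _ in range(n)]
    # deterministic averaging: emit a pulse whenever the
    # accumulated value crosses 1 (evenly spaced pulses)
    acc, burst = 0.0, []
    for _ in range(n):
        acc += x
        if acc >= 1.0:
            burst.append(1)
            acc -= 1.0
        else:
            burst.append(0)
    return burst

def decode(burst):
    """Recover the encoded value as the time-averaged pulse density."""
    return sum(burst) / len(burst)
```

For x = 0.25 the deterministic burst yields exactly one pulse in every four slots, while the stochastic burst only approaches 0.25 on average over many pulses – precisely the deterministic-to-stochastic continuum mentioned above.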

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

Switched electrical circuits as computing systems

We can define computations as the processes by which electrical circuits work, associated with sequences of (meaningful) events. Let’s take these events as discrete, i.e. something that can be enumerated with integer indices.

We can then map sequences of events onto integer numbers, or indices. Events can be associated with the system reaching certain states or, in a more distributed view, with individual variables of the system reaching certain states or levels. Another view is that a component of the system’s model moves from one state to another.

To mark such events and enable them, we need sensory or actuating properties in the system. Why not simply consider an element called a “switch”:

Switch = {ON if CTRL = ACTIVE, OFF if CTRL = PASSIVE}

What we want to achieve is to be able to express the evolution of physical variables as functions of event indices.

Examples of such computing processes are:

  • Discharging a capacitance
  • Charging a (capacitive) transmission line
  • A switched-capacitor converter
  • A VCO based on an inverter ring, modelled by switched parasitic caps.
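As a toy illustration of expressing a physical variable as a function of the event index (my own sketch, with hypothetical parameter values and idealized charge sharing): a switched-capacitor discharge in which each switching event connects a small, initially empty capacitor to the main one and then disconnects it, so the main-cap voltage depends only on the event count n, not on continuous time.

```python
def switched_cap_discharge(v0, c_main, c_small, n_events):
    """Voltage on the main capacitor after each switching event.

    Each event shares the main capacitor's charge with a small,
    initially empty capacitor, which is then disconnected and
    emptied, so V[n] = V0 * (C / (C + c))**n -- a function of
    the event index n rather than of continuous time.
    """
    ratio = c_main / (c_main + c_small)
    v, trace = v0, []
    for _ in range(n_events):
        v *= ratio  # charge redistributes at each switch closure
        trace.append(v)
    return trace
```

With equal capacitances the voltage halves at every event: switched_cap_discharge(1.0, 1e-12, 1e-12, 3) gives [0.5, 0.25, 0.125] – evolution indexed purely by switching events, in the spirit of the “switching calculus” sought below.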

The goal of modelling is to find a way of solving for the behaviour of computational electrical circuits using a “switching calculus” (similar to Heaviside’s “operational calculus”, used to solve differential equations in an efficient way).

Leonid Rosenblum passes away …

Today in Miami, at the age of 83, the well-known Russian and American automata theory scientist Leonid Rosenblum passed away. He was my mentor and closest friend. Here is some brief information about his career, in Russian (an English translation follows).

Леонид Яковлевич Розенблюм (5 марта 1936 г. – 2 апреля 2019 г.), канд. техн. наук, доцент – пионер мажоритарной логики, самосинхронной схемотехники, теории и применений сетей Петри в моделировании и проектировании цифровых схем и параллельных систем. В течение 20 лет, с 1960 г. по 1980 г., занимался с коллегами (в группе профессора В.И. Варшавского) наукой и приложениями (например, разработкой новой схемотехники и надежных бортовых компьютеров) в Вычислительном центре Ленинградского отделения Математического института им. В.А. Стеклова АН СССР.

С 1981г. по 1989 г. работал доцентом кафедры математического обеспечения и применения ЭВМ в ЛЭТИ им. В.И. Ульянова-Ленина (ныне Санкт-Петербургский государственный электротехнический университет). В 90-х годах после эмиграции в США работал адъюнкт-профессором в Бостонском университете, а также исследователем в Гарвардском университете.

Соавтор/автор пяти книг, около двух сотен различных изданий, учебных пособий, статей и обзоров, более 40 авторских свидетельств на изобретения.

Среди его учеников – профессора университетов России, Великобритании, США, Финляндии и других стран, сотрудники институтов АН Российской Федерации, таких как Институт Проблем Управления, а также известных отечественных и зарубежных компаний, таких как Intel, Cadence, Xilinx и т.д.

Леонида Яковлевича отличало врожденное свойство видеть в людях только положительные качества, помогать всем и во всем, и конечно необыкновенное чувство юмора. Эта утрата для огромного числа людей повсюду, всех кому посчастливилось его знать или слышать о нем.

Вечная память, дорогой Лека!

Leonid Yakovlevich Rosenblum (March 5, 1936 – April 2, 2019), Candidate of Technical Sciences, Associate Professor – a pioneer of majority logic, self-timed circuit design, and the theory and applications of Petri nets in the modelling and design of digital circuits and parallel systems.

For 20 years, from 1960 to 1980, he worked with his colleagues (in the group of Professor V.I. Varshavsky) on science and applications (for example, developing new circuitry and reliable on-board computers) at the Computing Centre of the Leningrad Branch of the V.A. Steklov Mathematical Institute of the USSR Academy of Sciences.
From 1981 to 1989, he worked as an associate professor at the Department of Software and Computer Applications at LETI named after Ulyanov-Lenin (now St. Petersburg State Electrotechnical University). In the 90s, after emigrating to the United States, he worked as an adjunct professor at Boston University, as well as a researcher at Harvard University.
Author or co-author of five books, about two hundred publications, textbooks, articles and reviews, and more than 40 inventor’s certificates.

Among his students are professors at universities in Russia, the United Kingdom, the United States, Finland and other countries; employees of institutes of the Russian Academy of Sciences, such as the Institute of Control Sciences; and staff of well-known domestic and foreign companies such as Intel, Cadence, Xilinx, etc.

Leonid Yakovlevich was distinguished by an innate ability to see only the positive qualities in people, to help everyone in everything, and, of course, by an extraordinary sense of humour. This is a great loss for a huge number of people everywhere – all who were lucky enough to know him or hear of him.
Rest in peace, dear Leka!

 

My paper “Energy current and computing” is online

Theme issue of the Royal Society Philosophical Transactions A

Celebrating 125 years of Oliver Heaviside’s ‘Electromagnetic Theory’ compiled and edited by Christopher Donaghy-Spargo and Alex Yakovlev is now online:

http://rsta.royalsocietypublishing.org/content/376/2134

My paper “Energy current and computing” is here:

http://rsta.royalsocietypublishing.org/content/376/2134/20170449

 

My PhD Thesis (1982) – scanned copy in pdf

I have finally managed to scan my PhD thesis “Design and Implementation of Asynchronous Communication Protocols in Systems Interfaces”, written in Russian (“Проектирование и реализация протоколов асинхронного обмена информацией в межмодульном интерфейсе”).

The thesis is spread between several files (total – 255 pages):

Title, Contents and Introduction:

Chapter 1 (General characterization of the methods of formal synthesis and analysis of communication protocols): 

Chapter 2 (Formalization of the behaviour of interacting objects and communication protocols):

Chapter 3 (Interpretation of asynchronous processes and use of interpreted models for the description and analysis of protocols):

Chapter 4 (Organization of aperiodic interface of intermodular communication):

Conclusion and References:

Appendices (1-5):

(1) Example of context procedure

(2) Example of controlled protocol

(3) Application of Petri nets to specification of asynchronous discrete structures

(4) Information transfer on three-state lines

(5) Analysis and implementation of the TRIMOSBUS interface

Exploitation confirmation letter from Ufa plant

 

Real Nature’s proportionality is geometric: Newton’s causality

I recently enjoyed e-mail exchanges with Ed Dellian.

Ed is one of the very few modern philosophers and science historians who have read Newton’s Principia in the original (and produced his own translation of the Principia into German, published in 1988).

Ed’s position is that real physical (Nature’s) laws reflect cause and effect in the form of geometric proportionality, the most fundamental being E/p = c, where E is energy, p is momentum and c is velocity – a proportionality coefficient, i.e. a constant associated with space over time. This view is in line with the Poynting-vector understanding of electromagnetism, also accepted by Heaviside in his notion of ‘energy current’. It is even the basis of Einstein’s E/mc = c.
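Written out explicitly (my own restatement, substituting p = mc into Einstein’s relation), both relations are instances of the same geometric proportion between energy and momentum, with the velocity c as the coefficient:

```latex
\frac{E}{p} = c,
\qquad\text{and, with } p = mc:\qquad
E = mc^2 \;\Longrightarrow\; \frac{E}{mc} = c .
```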

The diversion from geometric proportionality towards arithmetic proportionality was due to Leibniz and his principle of “causa aequat effectum”. According to Ed (I am quoting him here): “it is a principle that has nothing to do with reality, since it implies ‘instantaneity’ of interaction, that is, interaction independent of ‘real space’ and ‘real time’, conflicting with the age-old natural experience expressed by Galileo that ‘nothing happens but in space and time’.” It is therefore important to see how Maxwellian electromagnetism is viewed by scholars. For example, Faraday’s law states an equivalence of EMF and the rate of change of magnetic flux – it is not a geometric proportion, hence it is not causal!

My view, based on my experience with electronic circuits and my understanding of the causality between energy and information transfer (state changes), where energy is the cause and information transfer is the effect, is in agreement with geometric proportionality. Energy causes state transitions in space-time. This is what I call energy-modulated computing. The challenge is to refine this proportionality in every real problem case!

If you want to know more about Ed Dellian’s views, I recommend visiting his site http://www.neutonus-reformatus.de  which contains several interesting papers.

 


A causes B – what does it mean?

There is a debatable issue that concerns the presence of causality in the interpretation of some physical relationships, such as those involved in electromagnetism. For example, “the dynamic change in magnetic field H causes the emergence of electric field E”. This is a common interpretation of one of the key Maxwell’s equations (originating in Faraday’s law). What does this “causes” mean? Is the meaning purely mathematical or is it more fundamental, or physical?

First of all, any man-made statement about real-world phenomena is not, strictly speaking, physical, because it is formulated by humans within their perceptions – or, whether we want it or not, models – of the real world. So even when we use English to express our perceptions, we already depart from the “real physics”. Mathematics is just another man-made form of expression, one underpinned by mathematical rigour.

Now let’s get back to the interpretation of the “causes” (or causality) relation. It is often synonymized with the “gives rise to” relation. Such relations cause a lot of confusion when they originate from the interpretation of mathematical equations. For example, Faraday’s law in mathematical form, curl(E) = – dB/dt, does not say anything about the RHS causing or giving rise to the LHS. (Recall that B is proportional to H, with the permeability of the medium being the coefficient of proportionality.)

The interpretation problem, when taken outside pure mathematics, leads to the question, for example, of HOW QUICKLY the RHS causes the LHS. And here we have no firm answer. The question of “how quickly does the cause have an effect” is very much physical (yet neither Faraday nor Maxwell state anything about it!), because we are used to thinking that if A causes B, then there is some temporal precedence between the event associated with A and the event associated with B. We also know that this ‘causal precedence’ is unlikely to act faster than the speed of light (we have seen no evidence of information signalling faster than the speed of light!). Hence, causality at the speed of light may simply be the result of our causal interpretation. But then it is probably wrong to assume that Faraday or Maxwell gave this sort of interpretation to the above relationship.

Worth thinking about causality, isn’t it?

I have no clear answer, but in my opinion, reading the original materials on electromagnetic theory, such as Heaviside’s volumes, rather than modern textbooks would be a good recipe!

I recommend anyone interested in this debatable matter check out Ivor Catt’s view on it:

http://www.ivorcatt.co.uk/x18j73.pdf

http://www.ivorcatt.co.uk/x18j184.pdf

To the best of my knowledge, Catt was the first to notice and write about the fact that modern texts on electromagnetism actively use the ‘causes’ interpretation of Maxwell’s equations. He also claims that such equations are “obvious truisms about any body or material moving in space”. The debatable matter may then start to move from the question of the legitimacy of the causal interpretation of these equations towards the question of how useful these equations are for an actual understanding of electromagnetism …