My talk at the 2nd Workshop on Reaction Systems

Following the 1st School on Reaction Systems in Toruń, Poland, the 2nd Workshop on Reaction Systems was held in the same city.

The workshop programme is listed here:

http://wors2019.mat.umk.pl//workshop/

I gave a talk on “Bringing Asynchrony to Reaction Systems”. The talk presented work in (pre-)progress, mostly developed during the Reaction Systems week in Toruń.

The abstract of my talk is below:

Reaction systems have a number of underlying principles that govern their operation: (i) maximum concurrency, (ii) complete renewal of state (no permanency), (iii) both promotion and inhibition, (iv) 0/1 (binary) resource availability, and (v) no contention between resources. Most of these principles could be seen as constraints in a traditional asynchronous behaviour setting. However, under a certain viewpoint these principles do not contradict the principles underpinning asynchronous circuits, provided the latter are considered at an appropriate level of abstraction. Asynchrony typically allows enabled actions to execute in either order, retains the state of enabled actions while other actions are executed, involves fine-grained causality between elementary events, and permits arbitration for shared resources. This talk will discuss some of these apparent conflicts and attempt to show ways of resolving them, thereby bringing asynchrony into the realm of reaction systems. Besides that, we will also look at how the paradigm of reaction systems can be exploited in designing concurrent electronic systems.
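
To make the comparison concrete, here is a minimal Python sketch of the standard reaction-system step (my own illustration, not part of the talk): a reaction is a triple of reactant, inhibitor and product sets, and the next state is the union of the products of all enabled reactions, which captures maximum concurrency, no permanency, promotion/inhibition, binary resources and the absence of contention.

# Minimal sketch of one reaction-system step (illustrative, not from the talk).
# A reaction (R, I, P) is enabled in a state T when R is contained in T and
# I is disjoint from T; the next state is the union of the products of all
# enabled reactions: everything enabled fires, nothing persists by itself.

def rs_step(state, reactions):
    """state: set of entities; reactions: iterable of (reactants, inhibitors, products)."""
    next_state = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            next_state |= products        # all enabled reactions fire together
    return next_state                     # entities not re-produced disappear

# Toy run: the second reaction's product 'c' later inhibits the first one.
reactions = [
    ({'a'}, {'c'}, {'b'}),
    ({'b'}, {'d'}, {'a', 'c'}),
]
state = {'a'}
for _ in range(3):
    state = rs_step(state, reactions)
    print(sorted(state))                  # ['b'], then ['a', 'c'], then []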

The slides of my talk are here


My lecture on Asynchronous Computation at the 1st School on Reaction Systems

The 1st School on Reaction Systems took place in the historic city of Toruń, Poland.

It was organised by Dr Lukasz Mikulski and Prof Grzegorz Rozenberg at the Nicolaus Copernicus University.

I managed to attend a number of lectures and gave my own lecture on Asynchronous Computation (from the perspective of an electronics designer).

Here are the slides:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Torun-Yakovlev-lecture-final.pdf

Ideas picked up at the 1st School on Reaction Systems in Torun, Poland

Grzegorz Rozenberg’s lecture on Modularity and looking inside the reaction system states.

  • Some subsets of reactants will be physical – they form modules.
  • Stability implies a lattice: a state transition is locally stable if the subsets (modules) in the states are isomorphic. These subset structures form partial orders, so we have an isomorphism between partial orders. So, structurally, nothing really changes during those transitions – nothing new!
  • Biologists call this “adulthood”. It would be nice to have completion detection for that class of equivalence!

Paolo Milazzo’s talk (via Skype) on Genetic Regulatory Networks.

  • Some methods exist in gene regulation for saving energy – say by using lactose (as some sort of inhibitor)
  • He talked about sync/async Boolean networks of regulatory gene networks.

Paolo Bottoni on Networks of Reaction Systems.

  • Basic model – the environment influences the reaction systems
  • Here we consider reaction systems influencing the environment

Robert Brijder on Chemical Reaction Networks.

Hans-Joerg Kreowski on Reaction Systems on Graphs.

  • Interesting graph transformations as reaction systems.
  • Examples involved some graph growth (e.g. fractals such as Sierpiński graphs)

Grzegorz Rozenberg on Zoom Structures.

  • Interesting way of formalizing the process of knowledge management and acquisition.
  • Could be used by people in, say, drug discovery and other data-analytics domains

Alberto Leporati on Membrane Computing and P systems.

  • The result of an action in a membrane is produced to the outside world only when the computation halts.
  • Question: what if the system is so distributed that we have no ability to guarantee the whole system halts? Can we have partial halts?
  • Catalysts can limit parallelism – sounds a bit like some sort of energy or power tokens

Maciej Koutny on Petri nets and Reaction Systems

  • We need not only to prevent consumption (using read arcs) but also to prevent (inhibit!) production – something like “joint OR causality” or an opportunistic merge can help.


New book on Carl Adam Petri and my chapter “Living Lattices” in it

A very nice new book “Carl Adam Petri: Ideas, Personality, Impact”, edited by Wolfgang Reisig and Grzegorz Rozenberg, has just been published by Springer:

https://link.springer.com/book/10.1007/978-3-319-96154-5

Newcastle professors Brian Randell, Maciej Koutny and I contributed articles to it.

An important aspect of those and other authors’ articles is that they mostly talk about WHY certain models and methods related to Petri nets have been investigated rather than describing the formalisms themselves. Basically, some 30-40 years of history are laid out on 4-5 pages of text and pictures.

My chapter “Living Lattices” provides a personal view of how Petri’s research inspired my own research, including comments on related topics such as lattices, Muller diagrams, complexity, concurrency, and persistence.

The chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_28

There is also an interesting chapter by Jordi Cortadella “From Nets to Circuits and from Circuits to Nets”, which reviews the impact of Petri nets in one of the domains in which they have played a predominant role: asynchronous circuits. Jordi also discusses challenges and topics of interest for the future. This chapter can be downloaded from here:

https://link.springer.com/chapter/10.1007/978-3-319-96154-5_27


Superposing two levels of computing – via meta-materials!?

Computing is layered.

We have seen it in many guises.

(1) Compiling a program (i.e. executing the program synthesis) and then executing the program

(2) Configuring the FPGA and then executing the FPGA code

….

Some new avenues of multi-layered computing are coming with meta-materials.

On one level, we can have computing with potentially non-volatile states – for example, we can program materials by changing their most fundamental parameters, like permittivity (epsilon) and permeability (mu). This is configurational computing, which itself has certain dynamics. People who study materials, and even devices, very rarely think about the dynamics of such state changes. They typically characterize them in a static way – I–V curves, hysteresis curves, etc. What we need is more time-domain characterization, such as waveforms, state graphs …

More standard computing is based on the stationary states of parameters. Whether analog or digital, this computing is often characterized in dynamic forms, and we can see timing and state diagrams, transients …

When these two forms of computing are combined, i.e. when the parameter changes add further degrees of freedom, we have two-level computing. This sort of layered computing is increasingly what we need when we talk about machine learning and autonomous computing.

Meta-materials are a way to achieve that!

Ultra-ultra-wide-band Electro-Magnetic computing

I envisage a ‘mothball computer’ – a capsule whose outer surface harvests power from the environment, with the computational electronics inside.

High-speed clocking can be provided by EM radiation of the highest possible frequency – e.g. by visible light, X-rays or ultimately by gamma rays!

The power supply for the modulation electronics can be generated by solar cells – perovskite cells. Because perovskite cells contain lead, they can also stop the gamma rays from propagating outside the compute capsule.

Information will be in the form of time-modulated super-HF signals.

We will represent information in terms of time-averaged pulse bursts.

We will have a ‘continuum’ of temporal compute, operating in the range from a deterministic one-shot pulse burst (discrete), through a deterministic multi-pulse analog averaged signal, to a stochastic multi-pulse averaged signal (cf. the book by Mars & Poppelbaum – https://www.amazon.co.uk/Stochastic-Deterministic-Averaging-Processes-electronics/dp/0906048443)
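
As a toy illustration of that continuum (my own sketch, assuming a value x in [0, 1] is carried by the density of pulses in a burst of n time slots), the same time-averaging decoder recovers x exactly from a deterministic burst and approximately, with sampling noise, from a stochastic one:

# Toy illustration (my own): value encoded as pulse density in a burst,
# decoded by time averaging; deterministic and stochastic ends of the range.
import random

def deterministic_burst(x, n):
    """Evenly spread round(x*n) pulses over n slots."""
    k = round(x * n)
    return [1 if (i * k) // n < ((i + 1) * k) // n else 0 for i in range(n)]

def stochastic_burst(x, n):
    """Each slot carries a pulse with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(burst):
    """Time-averaged value of the burst."""
    return sum(burst) / len(burst)

x, n = 0.3, 1000
print(decode(deterministic_burst(x, n)))   # exactly 0.3
print(decode(stochastic_burst(x, n)))      # close to 0.3, with sampling noise

Moving between the two ends of the range is then a matter of how the pulse positions are generated, not of how the burst is read out.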

Temporal Computing (https://temporalcomputing.com) is the right kind of business opportunity for this Odyssey!

Switched electrical circuits as computing systems

We can define computations as the working processes of electrical circuits, associated with sequences of (meaningful) events. Let’s take these events as discrete, i.e. something that can be enumerated with integer indices.

We can then map sequences of events onto integer numbers, or indices. Events can be associated with the system reaching certain states or, in a more distributed view, with individual variables of the system reaching certain states or levels. Another view is that of a component of the system’s model moving from one state to another.

To mark such events and enable them we need sensory or actuating properties in the system. Why not simply consider an element called “switch”:

Switch = {ON if CTRL = ACTIVE, OFF if CTRL = PASSIVE}

What we want to achieve is to be able to express the evolution of physical variables as functions of event indices.

Examples of such computing processes are:

  • Discharging capacitance
  • Charging a (capacitive) transmission line
  • Switched cap converter
  • VCO based on inverter ring, modelled by switched parasitic caps.

The goal of modelling is to find a way of solving the behaviour of computational electrical circuits using a “switching calculus” (similar to Heaviside’s “operational calculus”, used to solve differential equations in an efficient way).
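
As a minimal sketch of this event-indexed view (my own illustration, with hypothetical component values), consider a switched-capacitor discharge: at each switching event a small, initially empty capacitor C2 is connected to C1, so the voltage on C1 evolves as a geometric function of the event index k rather than of continuous time.

# Minimal sketch (illustrative values): the state variable (the voltage on C1)
# evolves as a function of the event index k. At each event the switch shares
# the charge of C1 with a discharged C2, giving V[k+1] = V[k] * C1 / (C1 + C2).

C1, C2 = 1e-9, 0.1e-9       # farads (hypothetical values)
V = 1.0                      # initial voltage on C1, volts

def switch_event(v):
    """One ON/OFF cycle of the switch: share charge with a discharged C2."""
    return v * C1 / (C1 + C2)

for k in range(1, 6):
    V = switch_event(V)
    print(f"event {k}: V = {V:.4f} V")   # geometric decay in the event index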

Some of Leonid Rosenblum’s works

L. Ya. Rosenblum and A.V. Yakovlev.
Signal graphs: from self-timed to timed ones,
Proc. of the Int. Workshop on Timed Petri Nets,
Torino, Italy, July 1985, IEEE Computer Society Press, NY, 1985, pp. 199-207.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-TPN85.pdf

A paper establishing an interesting relationship between the interleaving and true-causality semantics
using algebraic lattices. It also identifies a connection between classes of lattices and the property
of generalisability of concurrency relations (from arity N to arity N+1),
i.e. the conditions for answering questions such as:
if three actions A, B and C are all pairwise concurrent, i.e. ||(A,B), ||(A,C) and ||(B,C), are they concurrent “in three”, i.e. ||(A,B,C)?
L. Rosenblum, A. Yakovlev, and V. Yakovlev.
A look at concurrency semantics through “lattice glasses”.
In Bulletin of the EATCS (European Association for Theoretical Computer Science), volume 37, pages 175-180, 1989.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/lattices-Bul-EATCS-37-Feb-1989.pdf
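
As a toy illustration of the question above (my own simplification, using a naive step semantics in which actions are concurrent iff they occur together in some observed step; the paper’s lattice-based treatment is more refined), pairwise concurrency need not generalise to concurrency “in three”:

# Toy check (my own simplification): ||(X) holds iff some observed step
# contains all actions of X simultaneously. The hypothetical steps below give
# pairwise concurrency for A, B, C without concurrency "in three".
from itertools import combinations

steps = [{'A', 'B'}, {'A', 'C'}, {'B', 'C'}]   # hypothetical observed steps

def concurrent(actions, steps):
    """||(actions): some single step contains all of them."""
    return any(set(actions) <= step for step in steps)

pairs_ok = all(concurrent(p, steps) for p in combinations('ABC', 2))
triple_ok = concurrent(('A', 'B', 'C'), steps)
print(pairs_ok, triple_ok)   # True False: ||(A,B), ||(A,C), ||(B,C) but not ||(A,B,C)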

A paper about so-called symbolic STGs, in which signals can have multiple values (which is often convenient for specifying control at a more abstract level than binary signals). To implement such specifications in logic gates one needs to solve the problem of binary expansion, or encoding, as well as resolve all the state coding issues on the way to synthesising a circuit implementation.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/async-des-methods-Manchester-1993-SymbSTG-yakovlev.pdf

A paper about analysing concurrency semantics using a relation-based approach. Similar techniques are now being developed in the domain of business process modelling and workflow analysis: L.Ya. Rosenblum and A.V. Yakovlev. Analysing semantics of concurrent hardware specifications. Proc. Int. Conf. on Parallel Processing (ICPP89), Penn State University Press, University Park, PA, July 1989, Vol. 3, pp. 211-218.

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-ICPP89.pdf

V. B. Marakhovsky, L. Ya. Rosenblum and A. V. Yakovlev. Моделирование параллельных процессов. Сети Петри (Modelling of Concurrent Processes. Petri Nets): a course for system architects, programmers, system analysts and designers of complex control systems. St. Petersburg: Professionalnaya Literatura, 2014. 398 pp. (Series “Избранное Computer Science”). ISBN 978-5-9905552-0-4.

https://www.researchgate.net/…/Simulation-of-Concurrent-Processes-Petri-Nets.pdf

Leonid Rosenblum passes away …

Today, in Miami, the well-known Russian and American automata theory scientist Leonid Rosenblum passed away at the age of 83. He was my mentor and closest friend. Below is some brief information about his career, in Russian and then in English translation.

Леонид Яковлевич Розенблюм (5 марта 1936 г. – 2 апреля 2019 г.), канд. техн.наук, доцент – пионер мажоритарной логики, самосинхронной схемотехники, теории и применений сетей Петри в моделировании и проектировании цифровых схем и параллельных систем.В течение 20 лет, с 1960г. по 1980г., занимался с коллегами (в группе профессора В.И. Варшавского) наукой и приложениями (например, разработкой новой схемотехники и надежных бортовых компьютеров) в Вычислительном центре Ленинградского отделения Математического института им. В.А. Стеклова АН СССР.

С 1981г. по 1989 г. работал доцентом кафедры математического обеспечения и применения ЭВМ в ЛЭТИ им. В.И. Ульянова-Ленина (ныне Санкт-Петербургский государственный электротехнический университет). В 90-х годах после эмиграции в США работал адъюнкт-профессором в Бостонском университете, а также исследователем в Гарвардском университете.

Соавтор/автор пяти книг, около двух сотен различных изданий, учебных пособий, статей и обзоров, более 40 авторских свидетельств на изобретения.

Среди его учеников – профессора университетов России, Великобритании, США, Финляндии и других стран, сотрудники институтов АН Российской Федерации, таких как Институт Проблем Управления, а также известных отечественных и зарубежных компаний, таких как Intel, Cadence, Xilinx и т.д.

Леонида Яковлевича отличало врожденное свойство видеть в людях только положительные качества, помогать всем и во всем, и конечно необыкновенное чувство юмора. Эта утрата для огромного числа людей повсюду, всех кому посчастливилось его знать или слышать о нем.

Вечная память, дорогой Лека!

Leonid Yakovlevich Rosenblum (March 5, 1936 – April 2, 2019), Candidate of Technical Sciences, Associate Professor – a pioneer of majority logic, self-timed circuit design, and the theory and applications of Petri nets in the modeling and design of digital circuits and parallel systems.

For 20 years, from 1960 to 1980, he worked with colleagues (in the group of Professor V. I. Varshavsky) on science and its applications (for example, developing new circuitry and reliable on-board computers) at the Computing Centre of the Leningrad Branch of the V. A. Steklov Mathematical Institute of the USSR Academy of Sciences.

From 1981 to 1989, he worked as an associate professor at the Department of Computer Software and Applications at the V. I. Ulyanov-Lenin Electrotechnical Institute (LETI, now St. Petersburg State Electrotechnical University). In the 1990s, after emigrating to the United States, he worked as an adjunct professor at Boston University and as a researcher at Harvard University.

He authored or co-authored five books, around two hundred publications, textbooks, articles and reviews, and more than 40 inventor’s certificates.

Among his students are professors of universities in Russia, the United Kingdom, the United States, Finland and other countries, staff of institutes of the Russian Academy of Sciences, such as the Institute of Control Sciences, as well as of well-known Russian and foreign companies such as Intel, Cadence and Xilinx.

Leonid Yakovlevich was distinguished by an innate ability to see only the positive qualities in people, to help everyone in everything, and, of course, by an extraordinary sense of humour. His loss is felt by a huge number of people everywhere – by everyone who was lucky enough to know him or to hear about him.

Rest in peace, dear Leo!


On “Quantum LC circuit paradox”

One of my younger friends and co-authors, Alex Kushnerov, has just pointed out to me the following statement:

“So, there are no electric or magnetic charges in the quantum LC circuit, but electric and magnetic fluxes only….”

It is made on the following website:

https://en.m.wikipedia.org/wiki/Quantum_LC_circuit

It seems that for ‘classical theorists’ in EM and Quantum Mechanics, this effect forms a paradox, which they call “Quantum LC circuit paradox”.

Presumably, if they started with energy current in the first place, which has nothing to do with charges or currents, and then simply captured the energy current in spatial forms that manifest themselves as “capacitors” or “inductors”, they would quantize it quite comfortably in the normal deterministic and causal sense. They would thus obtain the effects of LC without any need to resort to a special ‘quantum LC’.

I wrote about these ideas in my Royal Society Phil Trans paper:

Energy current and computing

And in my earlier blogs …, e.g. https://blogs.ncl.ac.uk/alexyakovlev/2014/10/

And, most importantly, that’s what Ivor Catt and his Catt Theory of EM have been trying to tell the rest of the world for more than half a century:

http://www.ivorcatt.co.uk

Talking at the AI Workshop held at the Center for AI Research (CAIR) at University of Agder, Norway

I was invited to the University of Agder, in the south of Norway (in a nice town called Grimstad, famous for the presence of Henrik Ibsen and Knut Hamsun), to present my vision of what kind of hardware we need for pervasive AI. This presentation was part of a workshop organised by Prof Ole-Christoffer Granmo, Director of CAIR, on the occasion of the grand opening of CAIR – https://cair.uia.no

In my presentation I emphasized the following points:

  • Pervasive Intelligence requires reconsidering many balances:
    – Between software and hardware
    – Between power and compute
    – Between analog and digital
    – Between design and fabrication and maintenance
  • Granulation phenomenon: Granularity of power, time, data and function
  • Main research questions:
    – Can we granulate intelligence to a minimum?
    – What is the smallest level at which we can make cyber-systems learn in terms of power, time, data and function?
  • Grand challenge for pervasive hardware AI:
    To enable electronic components with an ability to learn and compute in real-life environments, with real power and in real time
  • Research Hypothesis:
    We should design systems that are energy-modulated and self-timed, with maximally distributed learning capabilities

I put forward a strong hypothesis about the role of Tsetlin Automata (automata with linear tactics) in building electronics with high-granularity learning capabilities; a minimal sketch of a single such automaton is given after the list below.

The key elements of the proposed approach are:

  • Event-driven, robust to power and timing fluctuations
  • Decentralised Tsetlin Automata (TAs) for learning on demand
  • Mixed digital-analog compute where elements are enabled and controlled by individual TAs
  • Approximate by nature, both in learning and in compute
  • Asynchronous logic for h/w implementation
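
For readers unfamiliar with Tsetlin Automata, here is a minimal Python sketch of a single two-action automaton with linear tactics (my own illustration with a hypothetical Bernoulli environment, not the proposed hardware): rewards push the state deeper into the current action’s half, penalties push it towards the other action, so the automaton converges on the action that is rewarded more often.

# Minimal sketch of a two-action Tsetlin automaton with n states per action
# (illustrative only; the environment below is a hypothetical Bernoulli one).
import random

class TsetlinAutomaton:
    def __init__(self, n):
        self.n = n
        self.state = n          # states 1..n choose action 0, n+1..2n choose action 1

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # move away from the decision boundary (states n and n+1)
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # move towards the other action
        self.state += 1 if self.action() == 0 else -1

# Hypothetical environment: action 1 is rewarded 80% of the time, action 0 only 40%.
reward_prob = [0.4, 0.8]
ta = TsetlinAutomaton(n=6)
for _ in range(200):
    a = ta.action()
    if random.random() < reward_prob[a]:
        ta.reward()
    else:
        ta.penalize()
print("learned action:", ta.action())   # typically 1, the more rewarding action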

The full set of my slides is here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/AlexYakovlev-AI%20Hardware-070219.version3.pdf