Energy-vector, momentum, causality, Energy-scalar …

Some more interesting discussions with Ed Dellian have resulted in this ‘summary’, made in the context of my current level of understanding of Catt’s theory of electromagnetism:

  1. Energy current (E-vector) causes momentum p.
  2. Causality is established via the proportionality coefficient c (the speed of the energy current).
  3. Momentum p mediates between the E-vector and changes in the matter.
  4. Momentum p is conserved as the energy current hits the matter.
  5. Momentum in the matter presents another form of energy (E-scalar).
  6. The E-scalar characterises the elements of the matter as they move with a (material) velocity.
  7. As elements of the matter move, they cause changes in the energy current (E-vector); this forms a fundamental feedback mechanism (which is recursive/fractal …).

Telling this in terms of EM theory and electricity:

  • E-vector (the Poynting vector, aka the Heaviside signal) causes E-scalar (electric current in the matter).
  • This causality between E-vector and E-scalar is mediated by the momentum p causing the motion of charges.
  • The motion of charges with material velocity causes changes in the E-vector, i.e. the feedback effect mentioned above (e.g. self-induction).

I’d be most grateful if someone could refute any of these points.

I also recommend reading my blog post (from 2014) on discretisation:

On Quantisation and Discretisation of Electromagnetic Effects in Nature

Real Nature’s proportionality is geometric: Newton’s causality

I recently enjoyed e-mail exchanges with Ed Dellian.

Ed is one of the very few modern philosophers and science historians who have read Newton’s Principia in the original (and produced his own translation of the Principia into German, published in 1988).

Ed’s position is that the real physical (Nature’s) laws reflect cause and effect in the form of geometric proportionality. The most fundamental is E/p = c, where E is energy, p is momentum and c is velocity – a proportionality coefficient, i.e. a constant associated with space over time. This view is in line with the Poynting vector understanding of electromagnetism, also accepted by Heaviside in his notion of ‘energy current’. It is even the basis of Einstein’s E/(mc) = c.
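As a minimal numerical sketch of this geometric proportion (the constants are standard CODATA values; the frequency is my own illustrative choice, not from the discussion), the ratio E/p equals c exactly for a photon, since E = hf and p = hf/c:

```python
# Sanity check of the geometric proportion E/p = c for a photon.
# Constants are standard CODATA values; the frequency is just an example.
h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s

f = 5.0e14                # an optical frequency, Hz (illustrative)
E = h * f                 # photon energy:   E = h*f
p = h * f / c             # photon momentum: p = h*f / c

assert abs(E / p - c) / c < 1e-12   # E/p = c, a constant of space over time
```

The point of the sketch is only that the cause (energy current) and the effect (momentum) stand in a fixed geometric ratio given by c, rather than being equated arithmetically.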

The diversion from geometric proportionality towards arithmetic proportionality was due to Leibniz and his principle of “causa aequat effectum” (the cause equals the effect). According to Ed (I am quoting him here): “it is a principle that has nothing to do with reality, since it implies ‘instantaneity’ of interaction, that is, interaction independently of ‘real space’ and ‘real time’, conflicting with the age-old natural experience expressed by Galileo that ‘nothing happens but in space and time’”. It is therefore important to see how Maxwellian electromagnetism is viewed by scholars. For example, Faraday’s law states an equivalence between the EMF and the rate of change of magnetic flux – it is not a geometric proportion, hence it is not causal!

My view, which is based on my experience with electronic circuits and my understanding of the causality between energy and information transfer (state changes), where energy is the cause and information transfer is the effect, is in agreement with geometric proportionality. Energy causes state transitions in space-time. This is what I call energy-modulated computing. It is a challenge to refine this proportionality in every real problem case!

If you want to know more about Ed Dellian’s views, I recommend visiting his site http://www.neutonus-reformatus.de, which contains several interesting papers.


A causes B – what does it mean?

There is a debatable issue concerning the presence of causality in the interpretation of some physical relationships, such as those involved in electromagnetism. For example, “a dynamic change in the magnetic field H causes the emergence of the electric field E”. This is a common interpretation of one of Maxwell’s key equations (originating in Faraday’s law). What does this “causes” mean? Is the meaning purely mathematical, or is it more fundamental, or physical?

First of all, any man-made statements about real-world phenomena are not, strictly speaking, physical, because they are formulated by humans within their perceptions, or, whether we want it or not, models, of the real world. So, even if we use English to express our perceptions, we already depart from the “real physics”. Mathematics is just a man-made form of expression that is underpinned by mathematical rigour.

Now let’s get back to the interpretation of the “causes” (or causality) relation. It is often synonymised with the “gives rise to” relation. Such relations cause a lot of confusion when they originate from the interpretation of mathematical equations. For example, Faraday’s law in mathematical form, curl E = −dB/dt, does not say anything about the RHS causing or giving rise to the LHS. (Recall that B is proportional to H, with the permeability of the medium being the coefficient of proportionality.)
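A quick numerical sketch makes this point concrete (the plane-wave parameters below are my own example, not from the text): in Faraday’s law both sides are evaluated at the same instant t, so the equation itself encodes no temporal precedence between them. For a 1D plane wave, the z-component of curl E reduces to dEy/dx, and it equals −dBz/dt at the same moment:

```python
import math

# Plane-wave check of Faraday's law in 1D: (curl E)_z reduces to dEy/dx,
# and it equals -dBz/dt evaluated at the SAME instant t.
c  = 3.0e8                 # speed of light (illustrative rounded value)
w  = 2 * math.pi * 1e9     # angular frequency, rad/s (assumed example)
k  = w / c                 # wavenumber of the plane wave
E0 = 1.0                   # field amplitude, V/m

def Ey(x, t): return E0 * math.sin(k * x - w * t)
def Bz(x, t): return (E0 / c) * math.sin(k * x - w * t)

x, t   = 0.3, 1.0e-9       # a sample point in space and time
dx, dt = 1e-6, 1e-15       # small steps for central differences

lhs = (Ey(x + dx, t) - Ey(x - dx, t)) / (2 * dx)    # (curl E)_z = dEy/dx
rhs = -(Bz(x, t + dt) - Bz(x, t - dt)) / (2 * dt)   # -dBz/dt

assert abs(lhs - rhs) < 1e-3 * abs(lhs)   # equal at the same instant t
```

The equality holds with no lag parameter anywhere in sight; any “how quickly” question has to be imposed on top of the mathematics, which is exactly the interpretation problem discussed here.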

The interpretation problem, when taken outside pure mathematics, leads to the question, for example, of HOW QUICKLY the RHS causes the LHS. And here we have no firm answer. The question of “how quickly does the cause take effect” is very much physical (yet neither Faraday nor Maxwell state anything about it!), because we are used to thinking that if A causes B, then we imply some temporal precedence between the event associated with A and the event associated with B. We also know that this ‘causal precedence’ is unlikely to act faster than the speed of light (we have seen no evidence of information signalling faster than the speed of light!). Hence, causality at the speed of light may be the result of our causal interpretation. But then it is probably wrong to assume that Faraday or Maxwell gave this sort of interpretation to the above relationship.

Worth thinking about causality, isn’t it?

I have no clear answer, but in my opinion, reading the original materials on electromagnetic theory, such as Heaviside’s volumes, rather than modern textbooks would be a good recipe!

I recommend anyone interested in this debatable matter check out Ivor Catt’s view on it:

http://www.ivorcatt.co.uk/x18j73.pdf

http://www.ivorcatt.co.uk/x18j184.pdf

To the best of my knowledge, Catt was the first to notice and write about the fact that modern texts on electromagnetism actively use the ’causes’ interpretation of Maxwell’s equations. He also claims that such equations are “obvious truisms about any body or material moving in space”. The debatable matter may then move from the question of the legitimacy of the causal interpretation of these equations towards the question of how useful these equations are for an actual understanding of electromagnetism …


On the “Свой – Чужой” (Friend – Foe) paradigm, and can we do as well as Nature?

I recently discovered that there is no accurate linguistic translation of the words “Свой” and “Чужой” from Russian to English. A purely semantic translation of “Свой” as “Friend” and “Чужой” as “Foe” is only correct in this particular paired context of “Свой – Чужой” as “Friend – Foe”, which sometimes conveys the same idea as “Us – Them”. I am sure there are many idioms that likewise translate only as the “whole dish” rather than by ingredients.

Anyway, I am not going to discuss the linguistic deficiencies of languages here.

I’d rather talk about the concept, or paradigm, of “Свой – Чужой”, or equally “Friend – Foe”, that we can observe in Nature as a way of enabling living organisms to survive as species through many generations. WHY, for example, does one particular species not produce offspring as a result of mating with another species? I am sure geneticists would have some “unquestionable” answers to this question. But those answers would probably either be too trivial to trigger any further interesting technological ideas, or too involved, requiring lengthy study of the subject before any connections with non-genetic engineering could be seen. Can we hypothesise about this “Big WHY” by looking at analogies in technology?

Of course, another question crops up: why is that particular WHY interesting, and maybe of some use to us engineers?

Well, one particular form of usefulness could be in trying to imitate this “Friend – Foe” paradigm in information processing systems to make them more secure. Basically, what we want to achieve is that if a particular activity carries a certain “unique stamp of a kind”, it can only interact safely and produce meaningful results with another activity of the same kind. As activities or their products lead to other activities, we can think of some form of inheritance of the kind, as well as evolution in the form of creating a new kind with another “unique stamp of that kind”.

Look at this process as a physical process driven by energy. Energy enables the production of offspring actions/data from actions/data of a similar kind (Friends leading to Friends) or of a new kind, which is again protected from intrusion by the actions/data of others, the Foes.

My conjecture is that the DNA mechanisms in Nature underpin this “Friend – Foe” paradigm by applying unique identifiers, or DNA keys. In the world of information systems we generate keys (by prime generators, with filters to separate them from already used primes) and use encryption mechanisms. I guess that the future of electronic trading, if we want it to be survivable, lies in making available energy flows generate masses of such unique keys and stamp our actions/data as they propagate.
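As a hedged sketch of the “unique stamp of a kind” idea (I am substituting a standard keyed hash, HMAC-SHA256, for the prime-based keys mentioned above; the function names and messages are invented for illustration), data stamped by one kind is accepted only by holders of the same key:

```python
import hashlib
import hmac
import os

# "Friend - Foe" sketch with keyed tags: each kind holds a secret key,
# and data stamped by one kind is only accepted by the same kind.
# HMAC-SHA256 stands in for the post's "unique stamp of a kind".

def stamp(key: bytes, data: bytes) -> bytes:
    """Prefix the data with the kind's 32-byte tag."""
    return hmac.new(key, data, hashlib.sha256).digest() + data

def accept(key: bytes, stamped: bytes) -> bool:
    """A Friend (same key) verifies the tag; a Foe is rejected."""
    tag, data = stamped[:32], stamped[32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

kind_a = os.urandom(32)   # key of kind (species) A
kind_b = os.urandom(32)   # key of kind B, a Foe to A

msg = stamp(kind_a, b"offspring action")
assert accept(kind_a, msg)        # Friend: same kind interacts safely
assert not accept(kind_b, msg)    # Foe: a different kind is rejected
```

The inheritance idea maps naturally onto deriving a child key from a parent key, but that is beyond this small illustration.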

Blockchains are probably already using this “Свой – Чужой” paradigm, aren’t they? I am curious how mother Nature manages to generate these new DNA keys and not run out of energy. Probably there is hidden reuse there? There should be a balance between complexity and productivity somewhere.

IoT Technology Market 2017-2022 Prognosis

Quoting a recent Research and Markets report:

“Internet of Things Technology Market by Node Component (Processor, Sensor, Connectivity IC, Memory Device, and Logic Device), Network Infrastructure, Software Solution, Platform, Service, End-use Application and Geography – Global Forecast to 2022”

https://www.researchandmarkets.com/research/hld477/internet_of

“IoT technology market expected to grow at a CAGR of 25.1% during the forecast period”

“The IoT technology market is expected to be valued at USD 639.74 billion by 2022, growing at a CAGR of 25.1% from 2017 to 2022. The growth of the IoT technology market can be attributed to the growing market of connected devices and increasing investments in the IoT industry. However, the lack of common communication protocols and communication standards across platforms, and high-power consumption by connected devices are hindering the growth of the IoT technology market.”

Our paper on Performance-Energy-Reliability interplay in Multi-core Scaling

Fei Xia, Ashur Rafiev, Ali Aalsaud, Mohammed Al-Hayanni, James Davis, Joshua Levine, Andrey Mokhov, Alexander Romanovsky, Rishad Shafik, Alex Yakovlev, Sheng Yang, “Voltage, Throughput, Power, Reliability, and Multicore Scaling”, Computer, vol. 50, pp. 34-45, August 2017, doi:10.1109/MC.2017.3001246

http://publications.computer.org/computer-magazine/2017/08/08/voltage-throughput-power-reliability-and-multicore-scaling/

This article studies the interplay between the performance, energy, and reliability (PER) of parallel-computing systems. It describes methods supporting the meaningful cross-platform analysis of this interplay. These methods lead to the PER software tool, which helps designers analyze, compare, and explore these properties. The web extra at https://youtu.be/aijVMM3Klfc illustrates the PER (performance, energy, and reliability) tool, expanding on the main engineering principles described in the article.

The PER tool can be found here:

www.async.org.uk/prime/PER/per.html

Open access paper version is here:

http://eprint.ncl.ac.uk/file_store/production/231220/F814D1A8-84ED-4996-A2C1-6BD3763E6456.pdf

My talk at the Hardware Design and Theory Workshop in Vienna – October 2017

I gave a talk on “How to Design Little Digital, yet Highly Concurrent Electronics?” at the Hardware Design and Theory Workshop in Vienna in October 2017:

https://www.mpi-inf.mpg.de/departments/algorithms-complexity/hdt2017/

The workshop was part of the International Conference on Distributed Computing (DISC 2017):

http://www.disc-conference.org/wp/disc2017/

My presentation can be found here:

https://www.mpi-inf.mpg.de/fileadmin/inf/d1/HDT2017/DISC-HW-workshop-AY.pdf

Real Power stuff in ARM Research Summit 2017

There were several presentations about Real Power Computing at the last ARM Research Summit, held on 11-13 September 2017 in Cambridge (Robinson College).

The full agenda of the summit is here:

https://developer.arm.com/research/summit/agenda

The videos of the talks can be found here:

http://www.arm.com/summit/live

It is possible to navigate to the right video by selecting the webcam by the name of the room where that session was scheduled in the agenda.

The most relevant talk was ours on Real Power Computing, given by Rishad Shafik. It is listed under my name on Monday 11th September at 9:00.

Other relevant talks were by Geoff Merrett and Bernard Stark, in the same session, and by Kerstin Eder on Tuesday 12th at 9:00.

Real-Power Computing: Basics

What is Real-Power Computing?

RP Computing is the discipline of designing computer systems, in hardware and software, that operate under definite power or energy constraints. These constraints are either formed from the requirements of applications, i.e. known at the time of designing or programming these systems, or obtained from the real operating conditions, i.e. at run time. These constraints can be associated with limited sources of energy supplied to the computer systems, as well as with bounds on the dissipation of energy by computer systems.

Applications

These define areas of computing where power and energy require rationing for systems to perform their functions.

Different ways of categorising applications can be used. One possible way is to classify applications based on power ranges, such as microwatts, milliwatts, etc.

Another way would be to consider application domains, such as bio-medical, internet of things, automotive systems etc.

Paradigms

These define typical scenarios where power and energy constraints are considered and put into interplay with functionality. These scenarios define modes, i.e. sets of constraints and optimisation criteria. Here we look at the main paradigms of using power and energy on the roads.

Power-driven: Starting a bicycle or car from a stationary state as we go from low gears to high gears. Low gears allow the system to reach a certain speed with minimum power.

Energy-driven: Steady driving on a motorway, where we could maximise our distance for a given amount of fuel.

Time-driven: Steady driving on a motorway where we minimise the time to reach the destination while complying with speed-limit regulations.

Hybrid: Combinations of power-driven and energy-driven scenarios, as in PI(D) control.
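The road paradigms above can be sketched as selection policies over a set of operating points of a core. This is only a toy illustration: the frequency/power pairs, function names and workload figures below are all invented, not taken from any real platform.

```python
# Toy operating points for a core: (frequency in MHz, power in mW).
# All numbers are invented for illustration only.
points = [(200, 20.0), (400, 55.0), (800, 160.0), (1600, 480.0)]

work = 1e9  # workload size in cycles (assumed example)

def power_driven(pts, power_cap_mw):
    """Fastest operating point that stays under an instantaneous power cap."""
    ok = [p for p in pts if p[1] <= power_cap_mw]
    return max(ok, key=lambda p: p[0])

def energy_driven(pts, work_cycles):
    """Point minimising total energy = power * execution time."""
    return min(pts, key=lambda p: p[1] * (work_cycles / (p[0] * 1e6)))

def time_driven(pts, deadline_s, work_cycles):
    """Lowest-power point that still completes the workload by the deadline."""
    ok = [p for p in pts if work_cycles / (p[0] * 1e6) <= deadline_s]
    return min(ok, key=lambda p: p[1])

assert power_driven(points, 100.0) == (400, 55.0)     # fastest under a 100 mW cap
assert energy_driven(points, work) == (200, 20.0)     # minimum-energy point
assert time_driven(points, 2.0, work) == (800, 160.0) # cheapest point meeting 2 s
```

A hybrid policy would combine these, e.g. running power-driven while accelerating and energy-driven in the steady state, much like the gear-changing analogy above.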

Similar categories could be defined for budgeting cash in families, depending on the salary payment regimes and living needs. Another source of examples could be the funding modes for companies at different stages of their development.

Architectural considerations

These define the elements, parameters and characteristics of system design that help meet the constraints and optimisation targets associated with the paradigms. Some of them can be defined at design (programming and compile) time, while others are defined at run time and require monitors and controls.