The cacophony of particularities in Maxwell’s equations … at the end of the day, it’s only Catt’s Heaviside signal that puts things right!

Over the last couple of weeks I have been witnessing an interesting email discussion about Maxwell’s equations between two or three people trying to come to terms with the difficulty of accommodating the notion of displacement current in free space and ‘sorting out’ Ampère’s law. The latter combines, for whatever reason, elements of both a propagating field (which does not require charged particles, since the field can propagate without involving massed matter) and a current density (which implies the existence of massed particles).

I have drawn my own conclusions from this discussion, which ended with the view that the above-mentioned difficulty cannot easily be resolved within the bounds of the temple of classical electromagnetics, with its holy book of Maxwell’s laws.

Here are my comments on this:

To Ivor Catt:

Following your theory, where the Heaviside signal travels (and can only travel) at the speed of light in the medium, such a speed is entirely determined by epsilon and mu. Thus, where we have an interface between a very low epsilon dielectric and a very high epsilon metal, from the point of view of energy current we have an effect similar to friction (against the metal surface – much as a rotating wheel moves forward on the ground thanks to the friction it experiences against the ground). And thanks to this “friction” the energy current prefers to trolley along the metal wire, or between the metal plates of a capacitor.

To David Tombe:

Catt’s theory works at a different level of abstraction. This is the level of the fundamental energy current. This level underpins “charged particles”: the latter are the result of ExH energy current trapped in corresponding sections of space. What is important is that the trapped energy never stops inside those particles, as it can only exist in the form of ExH slabs moving at the speed of light in the epsilon-mu medium. So, when you apply an energy current travelling outside those particles, there is an interesting interaction with the energy current inside them.

The entire world is filled with energy current fractally sectioned into fragments determined by space sections. 

All I can say is that, in my opinion, you misunderstand the domain of action of Catt’s theory. It does not consider a static electric field. That’s it. There is no such thing as static EM energy. It can only move at speed c=dx/dt, in all directions.

And this energy fills up space according to its epsilon/mu properties.

There’s no need for Maxwell’s equations to be involved in Catt’s theory. All these equations are partial, like Greek gods.

To Ivor Catt:

I, perhaps surreptitiously, had been waiting for your email, either in public or in private.

Coincidentally, about an hour ago, I typed a message intended to be sent to the whole list from that discussion (adding Malcolm, who I think is on the same wavelength as us), saying:

“Ivor, please, say something, because these people are facing the impossible task of ‘squaring the circle’ of the set of Maxwell’s laws into something coherent – but the reason why it is impossible is that no one actually knows exactly what Maxwell meant by that list of laws dressed in fairly sophisticated mathematics. So, Ivor, the same fate may await you: unless you say something, in some 20 years from now {well, it looks like I miscalculated by 5 years from your estimate of 2045!} no one will know exactly what Catt meant by his energy current”.

Then some invisible force pushed me to discard that email! And now, after an evening walk to my office to freshen up my mind, here I see your email.

Sadly, people don’t listen and can’t liberate themselves from the heavy chains of those (partial) laws – which are indeed like the separate gods of the Greeks or Romans, each responsible for one aspect of life or one phenomenon in nature or another. They can’t understand that the Occam’s Razor of nature wouldn’t tolerate having so many (purportedly fundamental) relationships, with lots of tautology in them. All those relationships, taken individually, are contrapuntal and superposing.

People can’t understand that there is no need for stationary fields, no need for separate treatment of charged particles etc. Everything comes naturally as a result of energy current trapped in sections of space, where it continues to move.

I think the next big leap, where Catt’s vision will show its power, will happen in high-speed computing at a massively parallel scale – truly high-speed! Until then we will probably keep fighting the windmills of stale minds and deaf ears …

This confused discussion between David Tombe and Akinbo (not sure if Malcolm has seen it) illustrates the fact that behind the mathematically elegant façade of Maxwell’s laws there is a massive mess of physical concepts, a cacophony of man-made and contrived ‘pagan-like’ beliefs and disbeliefs (e.g., did Maxwell mean that or not?), which may work in special cases. Such beliefs can succeed in special cases – growing crops or hunting and herding animals in some regions of the world, or, in more modern terms, wiring up a Victorian mansion and sending data to Mars rovers. But how they are going to succeed in the future, when we need Terabit/s data rates or picosecond latency in accessing storage, nobody knows. I am less inclined to divide people into true scientists, careerists or other categories. Historical materialism (which we had to study back in the USSR) gave a pretty good explanation of all kinds of folk under the sun. Nobody is a saint here. It’s just a matter of personal comfort …

To Akinbo Ojo (in reply to his email attached below):

Just combine ∇²E = µε(∂²E/∂t²) and ∇²H = µε(∂²H/∂t²) into one equation, replacing E and H with ExH, and you’ll have the Heaviside signal (aka energy current) propagating in space at the speed of light in the epsilon/mu medium. That is all that is needed by Catt’s theory, and that is what fills up fragments of space in order to form transmission lines of particular Z0, capacitors, inductors, elementary particles, etc.

Everything in the world is filled up with this energy current, and any such entrapment of energy current turns sections of space into elements of matter (or mass)!
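
To make the combination above concrete, here is a minimal numerical sketch (my own illustration, in Python): a pulse profile travelling at c = 1/sqrt(mu*epsilon) satisfies the one-dimensional form of the wave equations quoted above, checked by finite differences.

```python
import numpy as np

# Free-space values (SI units); the medium's epsilon and mu set the speed entirely.
mu0  = 4e-7 * np.pi             # permeability, H/m
eps0 = 8.8541878128e-12         # permittivity, F/m
c    = 1.0 / np.sqrt(mu0 * eps0)

# A travelling slab profile E(x, t) = f(x - c*t): here a Gaussian, width in metres.
def E(x, t, width=1.0):
    return np.exp(-((x - c * t) / width) ** 2)

# Check the 1-D form of the wave equation, d2E/dx2 = mu*eps * d2E/dt2, by finite differences.
x, t   = 3.0, 1e-9
hx, ht = 1e-3, 1e-12
d2E_dx2 = (E(x + hx, t) - 2 * E(x, t) + E(x - hx, t)) / hx**2
d2E_dt2 = (E(x, t + ht) - 2 * E(x, t) + E(x, t - ht)) / ht**2

print(d2E_dx2, mu0 * eps0 * d2E_dt2)    # the two sides agree up to finite-difference error
```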

From: Akinbo Ojo <taojo@hotmail.com>
Sent: 14 January 2020 14:20
Subject: Re: Displacement Current in Deep Space for Starlight

Hi David,

I didn’t say there was an error. I said that, given the Ampère and Faraday equations, when you follow the curls and substitutions you will confront something that would be unpalatable to you and which you must swallow before you can get ∇²E = µε(∂²E/∂t²) or ∇²H = µε(∂²H/∂t²).

Regards,

Akinbo

Static vs Dynamic when referring to the electric field in a capacitor

I wrote in my paper “Energy current and computing” (https://royalsocietypublishing.org/doi/10.1098/rsta.2017.0449 ):

“there is no such a thing as a static electric field in a capacitor. In other words, a capacitor is a form of TL in which a TEM wave moves with a single fixed velocity, which is the speed of light in the medium”.

This statement causes some controversy – Ivor Catt refers to it as “heresy”.

Here I would like to explain what is meant by static/dynamic:

An important aspect of the distinction between ‘static’ and ‘dynamic’ is what we actually mean by those terms in the first place.

I think that the notion of dynamic/static first of all concerns whether a particular value (say, the electric field intensity E) changes in time or not, i.e. whether dE/dt is non-zero or not. Another notion of dynamic/static is about the movement of that value in space (and, necessarily, in time, because movement in space cannot be instantaneous!). So, if we talk about the electric field E, we can be talking about dE/dx being non-zero, and here is the critical notion of the link between dE/dt and dE/dx, which MUST be mediated by dx/dt (the speed of light in the medium!). The latter MUST BE ALREADY SET UP, ab initio, and that is what Ivor Catt’s Heaviside signal is about.

So, even if we have the impression that something is static – like the electric field in a fully charged or fully discharged capacitor – and this impression presents itself as the contrapuntal dE/dt=0, we somehow need to retain the notion of c=dx/dt being constant and non-zero. But then the immediate question arises: what is it that is moving in the longitudinal direction at speed c? And the answer is the Heaviside signal! What else? So, my understanding is that THIS MOVING THING is what makes me state that there is no such thing as a static electric field in a capacitor!
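
A minimal numerical sketch of that link between dE/dt and dE/dx (my own illustration, with an arbitrary pulse profile): for anything that merely travels as E(x, t) = f(x - ct), the variation in time and the variation in space are tied together by dx/dt = c.

```python
c = 3.0e8                        # speed of light in the medium, m/s (vacuum value assumed)

# Any profile that merely travels, E(x, t) = f(x - c*t), ties its variation in time
# to its variation in space through dx/dt = c:  dE/dt = -c * dE/dx.
f = lambda u: 1.0 / (1.0 + u ** 2)      # an arbitrary smooth slab profile
E = lambda x, t: f(x - c * t)

x, t, h = 2.0, 1.0e-9, 1e-4
dE_dt = (E(x, t + h / c) - E(x, t - h / c)) / (2 * h / c)
dE_dx = (E(x + h, t) - E(x - h, t)) / (2 * h)

print(dE_dt, -c * dE_dx)         # equal up to finite-difference error
```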

“Contrapuntal superposition” of Heaviside signals unravelled as a lookalike state coding problem in asynchronous circuit design

This article http://www.ivorcatt.co.uk/x267.pdf by Ivor Catt – published (now more than) 40 years ago – proposed looking at the transverse electromagnetic (TEM) wave by means of the so-called Heaviside signal. The Heaviside signal is basically EM “energy current”, described by the Poynting vector ExH (E and H are the electric and magnetic field intensities, respectively), that travels – and can only travel – in space at the speed of light in the medium, fully determined by its fundamental parameters, permittivity (epsilon) and permeability (mu), i.e. c=1/sqrt(mu*epsilon). The key point here, I should again stress, is that ExH cannot stand still – it can only travel at the speed of light. One might ask: where does it travel? It travels where the environment – i.e. the combination of materials – leads it to, and in practice it predominantly goes where the effective impedance of the medium is smaller. The effective or characteristic impedance of the medium, Z0, is also fully determined by the permittivity (epsilon) and permeability (mu), i.e. Z0=sqrt(mu/epsilon). Moreover, Z0=E/H – this is sometimes called the constant of proportionality of the medium.
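
As a quick illustration (my own sketch; the polyethylene-like relative permittivity of 2.25 is chosen purely as an example), both quantities follow directly from epsilon and mu:

```python
import numpy as np

mu0  = 4e-7 * np.pi            # permeability of free space, H/m
eps0 = 8.8541878128e-12        # permittivity of free space, F/m

def medium(eps_r=1.0, mu_r=1.0):
    """Propagation speed and characteristic impedance, set purely by epsilon and mu."""
    eps, mu = eps_r * eps0, mu_r * mu0
    c  = 1.0 / np.sqrt(mu * eps)     # speed of the Heaviside signal in the medium
    z0 = np.sqrt(mu / eps)           # Z0 = E/H, the constant of proportionality
    return c, z0

print(medium())                 # free space: ~3.0e8 m/s, ~377 ohm
print(medium(eps_r=2.25))       # a polyethylene-like dielectric: ~2.0e8 m/s, ~251 ohm
```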

Why is this view of the TEM wave more advantageous than others, such as, for example, the so-called “rolling wave” of alternating concentrations of magnetic energy (1/2)*mu*H^2 and electric energy (1/2)*epsilon*E^2 in the direction of propagation? As Catt shows in the above article, this more conventional way is actually metaphysical, because it is based on the assumption of causality between the electric field and the magnetic field and vice versa. The latter is a form of tautology, because it creates a non-physical – rather, a mathematical or equation-based – “feedback mechanism”, which does not make sense in physics.

Another important issue that calls for the use of the Heaviside signal is that it retains the notion of the travelling EM “ExH slab” in each direction in which it can travel, and hence its change-inducing geometric causality between points in space. As exemplified by the effects of travelling TEM waves in transmission lines (TLs), this view naturally separates the incident wave from the wave reflected off the interface with another medium, or from another wave that may travel in the opposite direction. As a result, the analysis of the behaviour of the TL becomes fuller and can explain phenomena such as the superposition of independent waves, for instance cross-talk between TLs. Here is another paper by Ivor Catt – published more than 50 years ago – http://www.ivorcatt.co.uk/x147.pdf and subsequent clarifications – http://www.ivorcatt.co.uk/x0305.htm – of the superposition of the even and odd modes (modes of the TEM wave travelling with different speeds of light in the medium due to the different epsilon and mu conditions arising between adjacent pairs of metal lines).

As shown in these papers, the view provided by the conventional theory is necessarily contrapuntal – it looks at the combined EM field at every point in space and time. As a result, it simply overlays the travelling ExH signals. And that is what one can see by measuring voltage and current at points of interest on the TL, or, equally, what one can see on an oscilloscope’s waveforms at points in space. Interestingly, looking at a number of points at the same time, in a spatially orderly way, leads to the conjecture that there is an interplay of several travelling TEM waves, but the conventional rolling-wave approach does not explain the physics behind them properly!
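
Here is a toy sketch of that contrapuntal overlay (the amplitudes, speed and geometry are my own arbitrary choices, not taken from Catt’s papers): a probe at a fixed point on the line only ever records the sum of the forward and reverse ExH slabs, so the two independent signals are indistinguishable in the measured waveform.

```python
c = 2.0e8                        # assumed propagation speed in the line's dielectric, m/s

def slab(u, width=0.1):
    """A rectangular ExH 'slab' of unit amplitude and given spatial width (metres)."""
    return 1.0 if 0.0 <= u < width else 0.0

def v_forward(x, t):             # slab travelling in +x, launched near x = 0
    return 1.0 * slab(x - c * t + 0.1)

def v_reverse(x, t):             # an independent slab travelling in -x (e.g. a reflection)
    return 0.5 * slab((1.0 - x) - c * t + 0.1)

# A probe (oscilloscope) at x = 0.5 m only ever records the contrapuntal sum of the two.
x_probe = 0.5
for k in range(7):
    t = 2.1e-9 + k * 0.25e-9
    vf, vr = v_forward(x_probe, t), v_reverse(x_probe, t)
    print(f"t = {t:.2e} s  forward = {vf:.2f}  reverse = {vr:.2f}  probe sees = {vf + vr:.2f}")
```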

What is remarkable in this for me is that it reminds me of the difference between two types of models of asynchronous control circuits, and of how one of them obscures the information revealed by the other. One type of model is based on recording purely binary encoded states of the circuit (akin to the contrapuntal notion). The other is based on a truly causal model (say, the Signal Transition Graph – or STG – called Signal Graph or Signal Petri Net in my early publications: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-TPN85.pdf or https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/AY-AP-PN90.pdf), where we have the explicit control flow of signal transitions or events running in the circuit. The difference between these two views is often manifested in the so-called Complete State Coding (CSC) problem (cf. https://www.researchgate.net/publication/2951782_Detecting_State_Coding_Conflicts_in_STGs). If we only look at the contrapuntal notion of the state, without knowing the pre-history of the event order, we cannot distinguish semantically different states that map onto the same binary code provided by the signals. To distinguish between such states one needs additional information or memory, which should be provided either in the underlying event-based model (the marking of the STG) or by introducing additional (aka internal or invisible) signals (in the process of solving the CSC problem). A toy sketch of such a coding conflict is given below.
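
Here is the sketch (the cyclic trace is my own contrived example, not taken from the cited papers): two different event histories reach the same binary code but expect different next events, so the code alone cannot resolve the state.

```python
# Toy illustration of a state coding conflict: two different event histories map
# onto the same binary code, so the code alone cannot tell which event comes next.

TRACE = ["a+", "b+", "b-", "a-", "b+", "b-"]   # a consistent cyclic signal trace

def replay(trace):
    """Walk the trace, recording (binary code, next expected event) after each prefix."""
    state = {"a": 0, "b": 0}
    visits = []
    for i, event in enumerate(trace):
        sig, polarity = event[0], event[1]
        state[sig] = 1 if polarity == "+" else 0
        nxt = trace[(i + 1) % len(trace)]
        visits.append((tuple(state.values()), nxt))
    return visits

codes = {}
for code, nxt in replay(TRACE):
    codes.setdefault(code, set()).add(nxt)

for code, futures in codes.items():
    if len(futures) > 1:
        print(f"code {code} is reached with different futures {futures}: a CSC-style conflict")
```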

I am not claiming that the above-noted analogy leads to a fundamental phenomenon, but it reflects the important epistemic aspect of modelling the physical world so that important relationships and knowledge are retained, yet in a minimalist (cf. Occam’s razor) way. Some more investigation into this analogy is needed.

Clearing my way through quantum entanglement – things are actually rather trivial …

The end of year 2019 was marked for me by a sudden revelation about the entanglement (aka EPR) “paradox”. Here is my confession, first. For a long time I had been thinking that the “superposition or entanglement paradox” consisted in the following:

Two particles (possessing a certain probabilistic characteristic such as spin) originating from the same source were entangled, i.e. connected by, say, one being in phase alpha while the other was in the opposite phase (180-alpha). The particles were then sent in different directions of travel, remained entangled, and – importantly – their alpha parameter would still be unknown. Then at some point in time and space one of the particles would be measured and, say, found to be equal to 1 (I suppose we can use binary encoding without loss of generality). Now, here comes my misinterpretation: at the very same moment in time the other particle would be disentangled, and its value would be exactly opposite, i.e. equal to 0. Therefore the paradox (in my interpretation) was as follows: (1) the state resolution in time is simultaneous for both particles, i.e. by forcing the measurement of the first particle we immediately have the measurement of the second particle; and (2) the state resolution in terms of the value of one particle completely determines the value of the second particle.

My problem was not in understanding why (2) was true. That had been quite clear to me for a long time, especially after my discussions with experts like Professor Werner Hofer, who explained to me, a non-expert in quantum theory, that both particles, once entangled, would retain their phases and in that respect would remain information-wise connected. My problem was in accepting and understanding (1), which in my view violated temporal causality and no action at a distance. I could not accept that there would be no delay between the initiated measurement of the first particle and the simultaneous resolution of the second particle. The reason for my conundrum was that I am a firm believer in the causality of related events in time, and I could not accept (1).

But, thanks again to good old Werner (!), with whom I talked a few days before Christmas 2019, I realised that there is actually no paradox at all here. What actually happens is that there is no issue (1) involved! The resolution of the second particle can in fact happen concurrently with, or independently of, that of particle one! And the whole pathos of the EPR was only in part (2). This gave me an enormous relief and the peace of mind needed for the coming festive season. The expected asynchrony and delay-insensitivity of the physical world had been restored! And, as far as part (2) is concerned, that was a trivial thing to me – it is purely a combinatorial (non-sequential in terms of automata) issue of one value being statically opposite to the other value – what’s the big deal!?

Now, what annoyed me in all this conundrum was, well, obviously my own naivety and my inaccurate reading about the EPR paradox. On the other hand, I think that the lack of clarity in separating the issues of timing from value, i.e. when from what – quite symptomatic of 20th-century mathematical physics, with its complex quantum mechanical constructions – is what leaves engineering-minded people like me, who expect both of these issues to be properly addressed, confused and misled!

Happy days!

Correction on my previous blog and some interesting implications …

Andrey Mokhov spotted that to satisfy the actual inverse Pythagorean we need to have alpha=1/2 rather than 2. That’s right. Indeed, what happens is that if we have alpha = 1/2 we would have (1/a)^2=(1/a1)^2+(1/a2)^2. This is what the inverse Pythagorean requires. In that case, for instance if a1=a2=2, then a must be sqrt(2). So the ratio between the individual decay a1=a2 and the collective decay is sqrt(2). For our stack decay under alpha = 2, we would have for a1=a2=2, a=1/2, so the ratio between individual decay and collective decay is 4.

It’s actually quite interesting to look at these relations a bit deeper, and see how the “Pythagorean” (geometric) relationship evolves as we change alpha from something like alpha<=1/2 to alpha>=2.

If we take alpha at 2 and above, the collective decay becomes slower than the individual decay by a factor of 4 or more. Physically this corresponds to the situation when the delay of an inverter in the ring becomes strongly inversely proportional to voltage. Geometrically, this is like contracting the height of a triangle whose sides open up to more than 90 degrees – say, for simplicity, an isosceles triangle with an apex angle of 100 degrees.

The case of alpha = 1/2 corresponds to the case where the delay is proportional to the square root of the voltage, and here the stack makes the decay rate follow the inverse Pythagorean! So this is the case of a triangle whose sides meet at 90 degrees.

But if alpha goes below 1/2, we have the effect of the collective decay being closer to the individual decays; geometrically, this is like the height of a triangle whose sides close up to less than 90 degrees!

Incidentally, Andrey Mokhov suggested we may consider a different physical interpretation of the inverse Pythagorean. Instead of looking at lengths a, b and h, one can consider the volumes Va, Vb and Vh of 4-D cubes with such side lengths. Then these volumes would relate exactly as in our case of alpha=2, i.e. 1/sqrt(Vh)=1/sqrt(Va)+1/sqrt(Vb).
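
A quick numerical check of the numbers above (assuming the stack relation generalises as (1/a)^(1/alpha) = (1/a1)^(1/alpha) + (1/a2)^(1/alpha), which reproduces both cases quoted here; that general form is my own reading, not a quote):

```python
import numpy as np

def collective_decay(a1, a2, alpha):
    """Assumed stack relation: (1/a)^(1/alpha) = (1/a1)^(1/alpha) + (1/a2)^(1/alpha)."""
    p = 1.0 / alpha
    return 1.0 / ((1.0 / a1) ** p + (1.0 / a2) ** p) ** (1.0 / p)

# alpha = 1/2: the inverse Pythagorean proper. a1 = a2 = 2 gives a = sqrt(2).
a = collective_decay(2.0, 2.0, alpha=0.5)
print(a, 2.0 / a)          # ~1.414, ratio individual/collective ~ sqrt(2)

# alpha = 2: a1 = a2 = 2 gives a = 1/2, i.e. a ratio of 4.
a = collective_decay(2.0, 2.0, alpha=2.0)
print(a, 2.0 / a)          # 0.5, ratio 4

# Andrey Mokhov's reading: with 4-D cube volumes V = side**4, sides obeying the
# inverse Pythagorean give 1/sqrt(Vh) = 1/sqrt(Va) + 1/sqrt(Vb) on the volumes.
h, a1, a2 = np.sqrt(2.0), 2.0, 2.0            # sides with 1/h^2 = 1/a1^2 + 1/a2^2
print(1 / np.sqrt(h**4), 1 / np.sqrt(a1**4) + 1 / np.sqrt(a2**4))   # both 0.5
```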

Cool!


Charge decay in a stack of two digital circuits follows inverse Pythagorean Law!

My last blog post, about my talk at HDT 2019 on Stacking Asynchronous Circuits, contained a link to my slides. I recommend having a particular look at slide #21. It describes an interesting fact: the discharge rate of a series (stack) connection follows the law of the inverse Pythagorean!

It looks like Mother Nature caters for a geometric law of the most economical ‘common’ of the two individual sides.

My Talk on Stacked Asynchronous Circuits at HDT 2019

I have just attended the Second Workshop on Hardware Design Theory, held in Budapest and collocated with the 33rd International Symposium on Distributed Computing: http://www.disc-conference.org/wp/disc2019/

The HDT’19 workshop was organised by Moti Medina and Andrey Mokhov. It had a number of invited talks, and here is the programme: https://sites.google.com/view/motimedina/hdt-2019

I gave a talk on Stacked Asynchronous circuits.

Here is the abstract: In this talk we will look at digital circuits from the viewpoint of electrical circuit theory, i.e. as loads to power sources. Such circuits, especially when they are asynchronous, can be seen as voltage-controlled oscillators. Their switching behaviour, including their operating frequency, is modulated by the supply voltage. Interestingly, in the reverse direction, if they are driven by external event sources, their switching frequency determines their inherent impedance, which itself makes them ideal potentiometers or voltage dividers. Such circuits can be stacked like non-linear resistors in series and parallel, and lend themselves to interesting theoretical and practical results, such as RC circuits with hyperbolic capacitor discharges and designs of dynamic frequency mirrors.

Here is the PDF of my slides: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/stacked-async-budapest-2019.171019.pdf
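
As a small illustration of the “hyperbolic capacitor discharge” mentioned in the abstract, here is a sketch under an assumed toy model (my own simplification, not taken from the slides): if the circuit’s switching frequency tracks the supply voltage, the load current goes roughly as V^2, and discharging a capacitor through such a load gives a hyperbolic rather than exponential decay.

```python
# Assumed toy model: an asynchronous circuit draws I = k * V**2 (frequency roughly
# proportional to V, charge per switching event proportional to V). Discharging a
# capacitor C through such a load gives C dV/dt = -k V**2, whose solution is the
# hyperbolic decay V(t) = V0 / (1 + (k*V0/C)*t).

C, k, V0 = 1e-9, 1e-3, 1.0       # illustrative values only, not taken from the talk
dt, steps = 2e-8, 50

V = V0
for n in range(steps + 1):
    t = n * dt
    if n % 10 == 0:
        analytic = V0 / (1.0 + (k * V0 / C) * t)
        print(f"t = {t:.1e} s   simulated V = {V:.4f}   hyperbolic V = {analytic:.4f}")
    V -= dt * (k / C) * V ** 2   # forward-Euler step of C * dV/dt = -k * V^2
```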

Questions about energy current (ExH slab)

Mac Rynkiewicz from Australia, who has been exploring the problem of the existence of the aether and recommended that I read papers by Vladimir Demjanov, a Russian physicist from Novorossiysk, has posed the following interesting questions:

(1) How exactly does an E by H slab (or em radiation or photons or photaenos) manage to follow say copper, including bends etc.

(2) How exactly does a change in impedance (or a termination) give partial (or full) reflexion.

(3) How is a good conductor a good obstructor? (there is a contradiction deep inside that, especially re heat losses). 

Here is my answer sent to Mac today:

My view:

Imagine a messy fall of water – like rain falling in all directions (something like what we see around Niagara Falls). That’s energy current ExH. The drops of rain can happily move where they can easily penetrate. In EM terms this means very low epsilon and mu. If you put some relatively porous material around (i.e. epsilon still lowish, but not very low), the drops will still penetrate relatively easily but will partly reflect back. Now imagine you put up a gutter made of hard waterproof material (i.e. very high epsilon). Drops will start concentrating near and along the gutter because they can’t penetrate the gutter’s material. Most likely the density of rain around the gutter will be much higher than further away from it.

Let’s now turn to Mac’s questions:

1) How exactly does an E by H slab (or em radiation or photons or photaenos) manage to follow say copper, including bends etc.

In the view of the above model, ExH slab will follow copper as its impenetrable gutter.

(2) How exactly does a change in impedance (or a termination) give partial (or full) reflexion.

In the view of the above model, a change in impedance will give partial or full reflection. Low impedance means high epsilon: no penetration, but following the gutter. A termination (high impedance) means there is no more good gutter to follow; the stream finds it hard to disperse near the terminator, reflects the energy back and increases the pressure on the gutter (e.g. higher E). If the impedance is low (higher epsilon), part of the stream is reflected and part is guttered.
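
For completeness, the conventional transmission-line reflection coefficient, rho = (ZL - Z0)/(ZL + Z0), puts numbers on this partial/full reflection; a minimal sketch (my own, using the standard textbook formula, which is not specific to the energy-current picture):

```python
def reflection_coefficient(z_load, z_line):
    """Conventional TL formula: rho = (ZL - Z0) / (ZL + Z0)."""
    return (z_load - z_line) / (z_load + z_line)

z_line = 50.0                               # characteristic impedance of the 'gutter', ohms
for name, z_load in [("matched load", 50.0),
                     ("open end (termination removed)", 1e12),
                     ("short (very low impedance)", 1e-6),
                     ("mismatched load", 150.0)]:
    rho = reflection_coefficient(z_load, z_line)
    print(f"{name:32s} rho = {rho:+.3f}   reflected fraction of power = {rho**2:.3f}")
```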

(3) How is a good conductor a good obstructor? (there is a contradiction deep inside that, especially re heat losses). 

In the view of the above model, we should actually “reverse” (cf. “We reverse this …”, as per Heaviside) the terminology when we move from electric current to energy current. From the energy-current point of view we must call the highly permitting material (high epsilon, like copper) a low-permitting material. In this sense a sponge should have a higher permittivity than copper. Copper is a gutter. Every time part of the rain hits the gutter, it loses energy. When it hits the sponge, it still penetrates.

What we have in EM now is a mess. Everything is defined from the point of view of an imaginary ‘electric current’ seen as the promoter of energy propagation. The naming of material properties, such as permittivity, is made to serve this imaginary world rather than the material world.

So, you are quite right: there is a contradiction deep inside that.

Looking at various ‘paradoxes’ from the energy current standpoint

There are many puzzling questions around the relationship between Maxwell’s equations and elementary particles. Many of them form unresolved paradoxes. Perhaps physicists can explain them in one way or another, but these explanations are often fairly complex and difficult to understand for a more practically minded person.

For example:

I saw the following paragraph in the paper “A derivation of Maxwell’s equations using the Heaviside notation” by Damian P. Hampshire, published in Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 376:

https://doi.org/10.1098/rsta.2017.0447

“We suggest investigating an Einstein–Podolsky–Rosen experiment [17]. Typically, an entangled electron–positron pair is mixed and prepared as a superposition of states with equal and opposite magnetic moments (or spins). The charges are separated and the magnetic moment or spin of the electron is measured. The well-known instantaneous collapse of the wavefunction occurs, so that the positron ends up with the opposite magnetic moment (or spin) to the electron. The appearance of the moment of the positron is triggered by entirely quantum mechanical effects—no direct electromagnetic communication occurs between the electron and positron. Indeed, one can think about the two charges as a single entity. However, one can argue that we do not really know how the information that leads to the positron producing a magnetic moment of opposite sign is instantaneously received—beyond asserting it is part of the fabric of quantum mechanics, or part of the nature of a macroscopic wavefunction. We suggest that while the moment of the positron is being created (rather than excited), the production of the B-field associated with its magnetic moment may not be coupled to the production of any E-field at all. So, one could measure (∂E/∂t)r and (∂B/∂t)r in the wavefront of the positron, hoping to find B-fields with E-fields that are inconsistent with Maxwell’s equations.”

I hypothesise that if we consider the (only!) postulate of energy current ExH captured in a finite space – pretty much as we do in a section of a Tx line – we should be able to explain the above entanglement of the electron–positron pair as a superposition of states. My previous blog posts about the Wakefield 4 experiment talk exactly about effects in this vein. We trap EM energy in a cap (a Tx line section) and then short-circuit it. We get the effect of the “two-faced Janus”, where the same object switches between electron and positron states.

A two-faced Janus, this electron!

Let’s think a bit more about the Wakefield 4 experiment.

We have already observed how it can be that at different points of the same Tx line we experience different behaviour of essentially the same object.

The point near the shorted (ON state) switch has 0V potential, so the E component of the Poynting vector ExH is essentially destroyed there, but we have the full H component there all the time.

At point B – in the middle of the Tx line – we have a full-swing oscillation between +7V and -7V, with equal intervals of time spent in each of these two states. So here we don’t have an active H component, but only the full E component swapping from its positive state to its negative one and back.

At points A and C we have a mix of both E-dominated and H-dominated intervals.

What sort of conclusions may we draw from this?

Well, one such conclusion is that a Tx line, first charged and then short-circuited, is something that can appear, from its different spatial sides, either as a swinging capacitor (only the E field is visible), or as an inductor (only the H field is visible), or as a bit of both – a time-division-multiplexed capacitor-inductor.

My hypothesis now is: is the electron a tiny Tx line that behaves like that two-faced Janus?

Why not?! The entire world, as I hypothesised in my “Energy current and computing” paper, is discretised or granulated into Tx lines, where we have substances (or the lack of them) with characteristic epsilon and mu. These “cocktails of epsilon and mu of particular values” form our matter and behave in EM fields accordingly, turning to us their points A, B, C, etc., depending on where we touch them with our instruments!