Static fields are an illusion …

In response to my previous blog post, which proved that EM power can only exist in motion at the speed of light, one might ask: what about static EM fields?

(cf. the Static Fields section of the Wikipedia page on the Poynting vector: https://en.wikipedia.org/wiki/Poynting_vector)

The corollary of the proposition proven earlier is that there are NO static fields per se.

Of course, we need to say what we mean by ‘static’ here. Well, static means – not moving! A common online English dictionary defines static (adjective) as follows: lacking in movement, action, or change, especially in an undesirable or uninteresting way.

So, by this definition I have every right to surmise that static fields do not move at the speed of light. This contradicts the proof. Therefore, the only way to resolve the contradiction is to conclude that Static Fields DO NOT have the right to exist!

Indeed, what is believed to be static is actually a superposition, or contrapuntal effect, of normally moving fields (Poynting vectors, to be precise), whose stepping or pulsing effects are not visible – a normal illusion due to superposition.

One might ask: what about, for example, the cylindrical capacitor shown at https://en.wikipedia.org/wiki/Poynting_vector?

The answer is: just the same thing – there are at least two power flows of the ExH form there, like two conveyor belts of sheaths moving against one another, where the H (magnetic) components are superposed and show the cumulative effect of H=0. Just short-circuit this cylinder at one edge, and you will see the effect of transition (redistribution) of the magnitudes of E and H, so that the total amount of power ExH crossing the spatial cross-section remains the same.
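As a rough numerical illustration of that superposition picture (my own sketch, not taken from the Wikipedia article): take two equal TEM slabs travelling in opposite directions inside the capacitor. Their magnetic components cancel, their electric components add, yet each slab still carries its ExH power; only the net flux is zero, which is why classical theory sees nothing moving.

```python
# A rough numerical sketch (my own, for illustration) of the superposition view:
# two equal TEM slabs travel in opposite directions; the observable H cancels,
# the observable E doubles, yet each slab keeps carrying its E x H power.

Z0 = 377.0                  # wave impedance of free space, ohms (approximate)

E_each = 5.0                # electric field of each slab, V/m
H_each = E_each / Z0        # magnetic field of each slab, A/m

# Slab 1 moves right, slab 2 moves left: same E sign, opposite H signs.
E_total = E_each + E_each   # electric fields add: the "static" E we measure
H_total = H_each - H_each   # magnetic fields cancel: 0 A/m

S_each = E_each * H_each    # Poynting power density carried by each slab, W/m^2
S_net  = E_total * H_total  # the classical "static" view: net E x H = 0

print(f"observed E = {E_total} V/m, observed H = {H_total} A/m, net ExH = {S_net} W/m^2")
print(f"power still flowing in each direction: {S_each:.4f} W/m^2 each way")
```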

So the Static Field (static in the sense of the above definition) is an illusion – just another of H. G. Wells’ Invisible Men visiting us!

On the Necessity and Sufficiency of the Poynting vector’s motion at the speed of light …

On the Necessity and Sufficiency of the Poynting vector’s motion at the speed of light for the existence of contrapuntal states observed in the Wakefield experiments

(see my earlier post: https://blogs.ncl.ac.uk/alexyakovlev/2019/09/14/wakefield-4-experiment-causal-picture-in-energy-current/ and Ivor Catt’s original paper on Wakefield 1: http://www.ivorcatt.co.uk/x343.pdf)

Alex Yakovlev

13 August 2020

The main hypothesis is:

H: EM energy current in the form of ExH (aka the Poynting vector) can only exist in motion at the speed of light.

Experiment:

Consider a Wakefield experiment with a transmission line (TL) that is initially discharged.

At time t=0, the TL is connected at point A (left-hand side) to a 10V source, where it is terminated with an open circuit. Point B is in the middle. Point C is at the right-hand side and is short-circuited.

Wakefield shows that:

At point A we have a square-wave oscillation between +10V (half the period) and -10V (half the period).

At point C we see no changes – a completely discharged line at 0V.

At point B we have the following cyclically repeated sequence of phases: (a) 0V (a quarter of the period), (b) +10V (a quarter), (c) 0V (a quarter), (d) -10V (a quarter).

A similar analysis can be carried out with an initially charged TL which is short-circuited at point A and is open-circuited at point C.
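Below is a minimal bounce sketch of that charged-line variant (my own construction, not taken from the Wakefield write-ups). It assumes Catt’s picture that the “static” 10V charge is really two counter-propagating 5V energy currents, each moving at the speed of light; with the roles of A and C mirrored relative to the list above, it reproduces the constant 0V at the shorted end, the ±10V square wave at the open end, and the four quarter-period phases at the midpoint.

```python
# A minimal bounce sketch of the charged-line variant (my own construction).
# Assumption: the "static" 10 V charge is two counter-propagating 5 V energy
# currents, each moving at the speed of light (one cell per time step here).
# End A is short-circuited (reflection -1), end C is open (reflection +1).

N = 200                      # spatial cells along the line; one transit takes N steps
right = [5.0] * N            # right-moving (A -> C) component of the energy current, volts
left  = [5.0] * N            # left-moving  (C -> A) component, volts
A, B, C = 0, N // 2, N - 1   # probe cells: shorted end, middle, open end

def v(p):
    """Observed voltage at cell p: the superposition of both moving components."""
    return right[p] + left[p]

for t in range(4 * N):                        # one full period = 4 transit times
    out_at_A = left[0]                        # left-moving amplitude arriving at the short (A)
    out_at_C = right[-1]                      # right-moving amplitude arriving at the open (C)
    right = [-1.0 * out_at_A] + right[:-1]    # shift right one cell; the short at A inverts
    left  = left[1:] + [+1.0 * out_at_C]      # shift left one cell; the open at C does not invert
    if t % (N // 2) == N // 4:                # sample mid-phase, away from the transition instants
        print(f"t={t:4d}   V(A)={v(A):+6.1f}   V(B)={v(B):+6.1f}   V(C)={v(C):+6.1f}")
```

The printed samples show the midpoint reading 0V twice per period – once on the way from +10V to -10V and once on the way back – which is exactly the contrapuntal ambiguity discussed next.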

Experimental fact:

W: We observe contrapuntal effects in Wakefield, such as at point B, where phases (a) and (c) are observationally equivalent – the cumulative effect of the ExH field waves makes them both read 0V – yet they lead to different subsequent behaviour: from (a) the line goes to (b), and from (c) it goes to (d).

The proposition:

P: The contrapuntal effects that we observe in Wakefield hold if and only if ExH can only exist in motion at the speed of light.

In other words, we state that W is true if and only if H holds, i.e. H is a necessary and sufficient condition for W.

Proof:

Sufficiency (H->W):

Suppose H is true. We can then easily deduce that at every point in space – A, B and C – the observed waveform will be as demonstrated by Wakefield.

(Ivor’s website contains my prediction for Wakefield 3 with contrapuntal behaviour – the analysis was based on Ivor’s theory – i.e. hypothesis H, and it was correctly confirmed by the experiment. For details see: http://www.ivorcatt.co.uk/x91cw34.htm and http://www.ivorcatt.co.uk/x842short.pdf)

Necessity (W->H, which is equivalent to not H -> not W):

Suppose H does not hold, i.e. at some point in space and/or time, ExH is stationary or does not travel at the speed of light. Let us first look, say, at point C. We see a “discharged state” – it corresponds to what we may call a stationary-state electric field, i.e. E=0 – a discharged piece of TL. Here we can plausibly say that the voltage across it is constantly equal to 0, because at C the line is short-circuited.

Next, we look at point B at a time when the voltage level is equal to 0V, say in phase (c). We think it is a static E=0, using the same argument as we did for point C. One might argue that point B is not short-circuited, but this does not matter from the point of view of our observation – it is just 0V.

How can we predict that after a specific and well-defined time interval the voltage at B will go down to -10V, and not up to +10V as it would have done had we been in phase (a)? In other words, how can we distinguish the states in those two phases using classical theory, where phase (a) is observationally equivalent to phase (c)?

The only way we could predict the real behaviour in W with classical theory is if we had some ADDITIONAL memory that would store, in another object, the information that although we were stationary here in that place and time interval, we were actually in transit between phases (b) and (d) rather than between (d) and (b).

The fact that we need ADDITIONAL memory (another TL) is something outside the scope of our original model, because we did not have it organised in the first place. So there is no knowledge in the original model that would make us certain that from phase (c) we will eventually and deterministically go to phase (d).

Q.E.D.

Note: The above fact of having phases (a), (b), (c) and (d) is the result of the contrapuntal effect of the superposition of the partial actions performed by the steps moving in the right and left directions. Unless that motion always (in time and in space) had a well-defined speed – the speed of light – we would not be able to predict that from phase (c) we will definitely and only transition to phase (d), and not to phase (b), nor how quickly that transition will happen. The case of a fully charged or fully discharged capacitor, with a seemingly stationary E field that is a contrapuntal effect of the superposed motion of ExH in all directions, is just a special case of the TL.
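To make the memory argument in the necessity proof a little more concrete, here is a toy illustration (my own construction, not part of the proof): an observer who sees only the voltage at B cannot tell phase (a) from phase (c), and therefore cannot predict the next observation; only an extra bit of hidden state removes the ambiguity.

```python
# A toy illustration (my construction) of why the 0 V observation at B is
# ambiguous without extra memory. The hidden cycle of phases is a -> b -> c -> d
# -> a; the observer sees only the voltage.

PHASES = ["a", "b", "c", "d"]
VOLTS  = {"a": 0, "b": +10, "c": 0, "d": -10}

def next_phase(p):
    """Advance one step around the hidden cycle of phases."""
    return PHASES[(PHASES.index(p) + 1) % 4]

# Phases (a) and (c) both read 0 V, yet are followed by +10 V and -10 V respectively.
for p in ("a", "c"):
    print(f"observe {VOLTS[p]:+3d} V  ->  next observation {VOLTS[next_phase(p)]:+3d} V")

# Only an extra bit of memory (which hidden transit we are in) removes the
# ambiguity -- exactly the ADDITIONAL memory the argument above refers to.
```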

Remark from David Walton:

The only way we could predict the real behaviour in W with classical theory is if we had some ADDITIONAL memory that would store, in another object, the information that although we were stationary here in that place and time interval, we were actually in transit between phases (b) and (d) rather than between (d) and (b).

is the key point.  

Another way to state the same thing, in a different context and less formally (I think), is to point out that when two pulses travelling in opposite directions pass through each other, either the B or the E fields will cancel, hence demonstrating that the field cannot be the cause of the onward propagation of the EM pulse.

My response:

That’s a great point you make. Indeed, the absence of either B or E in the contrapuntal state deprives us of the ability to talk about further propagation of the pulses.
Yes, the key point is the absence of memory about the dynamical process in the classical field model.

In summary:

Illusions … How many of them we have every day, simply because we do not know they are happening around us (not enough sensors or memory to track things).
The contrapuntal effects are those that H. G. Wells probably had in mind in the shape of the Invisible Man. They blind us from reality …

The real sense of the energy conservation law is in the permanent and omnipresent motion of energy

In my email exchange with Ivor Catt, the following idea came to my mind.

The law of energy conservation, as it is presented to students and commonly understood, is rather abstract, as it begs for many interpretations, because energy exists in its permanent and omnipresent motion. Even if it is trapped in a fragment of space, like a capacitor or an elementary particle, it is in motion.


So, what seems to be less convoluted is the law that energy can only exist in motion, and it can only move at the speed of light. That is actually what conservation of energy is. This is true by Occam’s razor principle and does not need to be proven. So, it is necessarily so before or after the switch [between the voltage source and a capacitor] is closed … and without this law we would not have had those perfect contrapuntal effects, including those that ’cause’ people to think we have stationary conditions in capacitors and transmission lines.

80th Anniversary of late Professor David Kinniment and my lecture on Research Leadership for Iraqi Researchers

Yesterday, 10th July, was a special day in the calendar – we celebrated the 80th anniversary of the late Professor David Kinniment. David was my closest mentor at Newcastle when I arrived here in 1991.

He was a pioneer of research in metastability, arbitration and synchronization, as well as VLSI design, and led the Microelectronic Systems Design group at Newcastle for 20 years.

We generated many ideas for projects, PhD research, papers, design tools, conference and industrial presentations. Above all, we just enjoyed spending time in discussions about science, culture and genealogy. David and his wife Anne welcomed on many occasions the whole Newcastle MSD team in their wonderful Sike View house in Kirkwhelpington in the middle of Northumberland.

By a lovely coincidence, there could not have been a better occasion: yesterday I was kindly invited to give a lecture, “Becoming a Researcher: from Follower to Leader”, to a wonderful audience of over 100 Iraqi researchers. The invitation came from my PhD alumnus Dr Ammar J M Karkar, Professor and Director of IT Research and Development at the University of Kufa, Iraq.

The lecture is now available on YouTube https://youtu.be/JnfObxmTslc

All the best!

Static vs Dynamic and Charges vs Fields

There is a constant debate in electromagnetism between the charge-based and the field-based views. I am of course over-simplifying the picture here, at least terminologically. But the main point is that you can talk about EM from the point of view of either: (i) objects that have mass, like electrons, protons, ions, etc. – I call them collectively charges or charge carriers; or (ii) entities that carry EM energy, like the strength of the electric and magnetic field, the Poynting vector, etc. – those are not associated with mass. Both views are often linked to some form of motion, or dynamics. For the world of objects, people talk about moving charges, electric current, static charges, etc. For the world of fields, people talk about EM waves, TE, TM and TEM modes, energy current, static fields, etc.

People often mix the two views, and that is where many paradoxes and contradictions arise. For example, there is an interesting ‘puzzle’ that has been posed to the world by Ivor Catt. It is sometimes called Catt’s Question or the Catt Anomaly.

http://www.electromagnetism.demon.co.uk/cattq.htm

Basically, the question is this: when a voltage step is transmitted along a transmission line from a source to the far end, according to classical EM theory charge appears on both wires (+ on the leading wire and – on the grounded wire). Where does this new charge come from?

Surprisingly, no one has given a convincing answer that does not violate one or another aspect of classical EM theory.

Similarly, there is a challenge posed by Malcolm Davidson, called the Heaviside Challenge (https://www.oliver-heaviside.com/), which has also not received a consistent response, even though it comes with a 5,000 USD prize!

So it seems that there is a fundamental problem in reconciling the two worlds in a consistent theory based on physical principles and laws, rather than on mathematical abstractions.

However, there is hope that the way to understand and explain EM phenomena, especially in high-speed electronic circuits, is through the notion of a Heaviside signal and the principle of energy current (the Poynting vector) that never ceases travelling at the speed of light in the medium. In terms of energy current, perfect dielectrics are perfect conductors of energy, whereas perfect charge conductors are perfect insulators for EM energy current.

So, while those who prefer the charge-based view of the world may continue to talk about static and dynamic charges, those who see the world via energy current live in a world where there is no such thing as a static electric or magnetic field, because a TEM signal can only exist in motion at the speed of light in the medium. The medium is characterised by its permittivity and permeability, which give rise to two principal parameters – the speed of light and the characteristic impedance. The inherent necessity of the TEM signal to move is stipulated by Galileo’s and Newton’s principles of geometric proportionality, which effectively define the relations between any change of a field parameter in time and its change in space. Those two changes are linked fundamentally, hence we have the coefficient of proportionality delta_x/delta_t, also known as the speed of light, which gives rise to causality between the propagation of energy or information and the momenta of force acting on objects with mass.
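As a small worked example of those two medium parameters (standard relations; the relative permittivity of 4 is an illustrative value chosen by me):

```python
# The two principal parameters of a medium, computed from its permittivity and
# permeability (standard relations; the eps_r = 4 example value is mine).

from math import sqrt, pi

EPS0 = 8.854e-12      # vacuum permittivity, F/m
MU0  = 4e-7 * pi      # vacuum permeability, H/m

def medium_parameters(eps_r=1.0, mu_r=1.0):
    """Return (propagation speed in m/s, characteristic impedance in ohms)."""
    eps = eps_r * EPS0
    mu  = mu_r * MU0
    v  = 1.0 / sqrt(eps * mu)   # speed of light in the medium: delta_x / delta_t
    z0 = sqrt(mu / eps)         # characteristic (wave) impedance of the medium
    return v, z0

print(medium_parameters())           # vacuum: ~3.0e8 m/s, ~377 ohms
print(medium_parameters(eps_r=4.0))  # a typical dielectric: half the speed, ~188 ohms
```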

Another consequence of the ever-moving energy current is its ability to be trapped in a segment of space, pretty much as in a so-called capacitor, and thus form an energised fragment of space, which gives rise to an object with mass, e.g. a charged particle such as an electron. So this corollary of the first principle of energy current paves the way to the view of EM that is based on charged particles.

Which ‘sect of thinkers’ do I belong to?

How is modern science different from what was in the land of Israel 2000 years ago? 

The four main sects of (then religious) thinkers were:

Sadducees – conformists to the Greco-Roman rulers

Pharisees – purists and devotees to the established canon

Essenes – ‘holy’ ones waiting for Messiah

Zealots – radical and militant ones

There were also the Scribes, but they were the sort of people Ivor Catt calls Parrots, and they were not influential – they were often closer either to the Pharisees or the Sadducees.


An interesting self-test is to think to which one (or none, or several) of them each of us belongs.

Addressing the COVID-19 gaps between data, models and decisions brings us back to hierarchical PID control, folks!

I’d like to comment on how careful we should be when we use data (even if it is accurate at the source and in its processing steps!), when we build models to extract dependencies between elements of the data, and ultimately when we make decisions.

A long time ago (approx. 40 years), when my father took over as head of the control engineering department at the St. Petersburg Electrical Engineering Institute (LETI) from the previous head, Professor Alexander Vavilov, their school was excited by exploring the idea of evolutionary synthesis of control systems. One crucial part of this study was the development of a theory of structural synthesis, where models of the system at each level of granularity had to be adequate to the criteria of optimal control. (By the way, graphs were essential in those models.)

The basic idea was that, depending on the level of granularity (or hierarchy) considered by the modeller, the system can have completely different criteria of correctness and/or optimality; hence certain aspects that are significant at a small scale may not be important at a larger scale.
It is a bit like the criteria of control at the national level not being the same as the criteria at the municipal level, nor the same at the level of a local community, nor at the level of family units and individual households.
So, because of these differences and clashes of interest between the different levels, there is a lot of anxiety and misunderstanding in societies.

So, what is the relationship with COVID-19?

Well the relationship is direct.

Let’s take the data on mortality in 2017 from the UK Office for National Statistics: https://www.ons.gov.uk/visualisations/dvc509/chart1/index.html

This data shows that the number of deaths across the country in one year is significant – hundreds of thousands, not far from a million. The relative number of deaths that we are now witnessing as a result of COVID-19, even if it hits 10K-20K, would be quite small by comparison.

So, we clearly have different perspectives here: one is national (spatial) and stretches across the whole year (temporal), while the other could be local (e.g. an area of population in London) and taken during these 2-3 weeks of March-April. The relative increase in the number of deaths at the national scale is a small bump on the curve. That is, integrating the number of deaths caused by respiratory problems due to COVID-19 at the national scale will not contribute much to the totals.

However, if we look spatiotemporally at the small scale, we may see a significant rise in terms of the differential and even the proportional response. So, if we are particularly sensitive to these two aspects, differential and proportional, we may actually decide to react with a powerful action.

What we are facing here is exactly what I started this post with. We are dealing with different levels of granularity (or hierarchy, whatever we call it). Consider the coarse granularity. From this point of view, our Mother Nature in us may say: well, why bother – the integral response (let us denote it by the letter I) is very small, and we look at time intervals of decades, so there is no need for any great change in decision-making; the problems of an environmental nature are much more serious.

But let’s go down to the level of individuals, especially those living in the areas most affected by COVID-19. Again, our Mother Nature in us would tell us that the rise in deaths due to coronavirus is an alarm; it may trigger a disaster, we may lose loved ones, lose a job and income. What is happening here? It is that at the lower granularity level, the criteria for decision-making are based on differential and proportional responses (let us denote them D and P). So, in mathematical terms, at different levels of granularity we apply different coefficients, or what engineers call gains, to these aspects P, I and D, and form our decisions according to those gains, or criteria of importance.
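For readers less familiar with the PID idiom, here is a minimal sketch (my own, purely illustrative) of how the three responses are weighted by gains to form a control decision; the gain values and the “national”/“local” labels are arbitrary examples of my own, not a model of any real policy.

```python
# A minimal, purely illustrative PID sketch: the decision (control action) is a
# gain-weighted sum of the proportional, integral and differential responses to
# the error signal. The gains below are arbitrary example values.

def make_pid(kp, ki, kd, dt=1.0):
    """Return a controller function; kp, ki, kd are the P, I, D gains."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(error):
        state["integral"] += error * dt                   # I: accumulated history
        derivative = (error - state["prev_error"]) / dt   # D: rate of change
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# A "national-scale" decision maker might weight the integral term most heavily;
# a "local-scale" one might be dominated by the proportional and differential terms.
national = make_pid(kp=0.1, ki=1.0, kd=0.0)
local    = make_pid(kp=1.0, ki=0.1, kd=2.0)

for error in [0.0, 0.1, 0.4, 1.0]:   # the same rising error signal fed to both
    print(f"error={error:4.1f}  national={national(error):6.2f}  local={local(error):6.2f}")
```

Which gains dominate determines whether the same signal triggers a sharp reaction or is largely absorbed – which is the mismatch of criteria between levels described above.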

So, ultimately, it is vital that the data we use, and the models which characterise this data in time and in space – where we calculate partial or full derivatives, integrate in space and time, or proportionalise in space and time – are adequate to the criteria of significance we apply, and lead to the corresponding decision-making at the appropriate level.

No doubt, nations that are harmoniously hierarchical and fractally uniform may have fewer problems in matching criteria of optimality with the P, I and D responses brought by the models from the actual data.

Yet again, we find that PID control seems to rule the world we live in!

COVID-19 – Why China Did What it Did

From the horse’s mouth. Received this morning from a Chinese source who is a top-class engineering expert.

Very revealing!

Some of the actions of the Chinese government, which seemed counter-intuitive at the time, became quite clear from this explanation.

  1. How the hell did they decide to close up Wuhan when the official death figure was only 30 something? 
    Remember that the city is a uniquely important communications hub with air, rail and river transport crossing in multiple directions (in a war they’d probably prioritize bombing the place). The time was just before the Spring Festival before the annual spring travel crush started. Closing Wuhan spoils the SF(CNY) for a huge number of people, hurts the feelings of even more and damages the economy significantly. The modelling teams were assembled much earlier than this date and this action was significantly model-driven. The models tested different actions and the actual sequence was chosen as the least bad one. Closing Wuhan on its own looked stupid to some degree, but not as the first of the sequence of actions that followed:
  2. What about the rest of the country then?
    The rest of the country was allowed to continue through the first phase of the spring travel rush, which decanted probably 1/3 of the population from large cities onto the countryside, then the entire country was closed down preventing their return. This prevented the appearance of another Wuhan, with which the government would have no way of dealing.
  3. Volunteering albeit under peer pressure is a key
    As it happened, they were able to assemble large teams of medics from elsewhere in the country (the so-called volunteers – if you were a party member not volunteering was not an option, and non-members esp. low ranking nurses had incentives such as conversion from contract worker to full-time permanent worker) to descend on Wuhan and its province Hubei en masse. This depletion of medical strengths elsewhere proved sustainable because another flareup never happened. The President did not formally thank the people of Wuhan on behalf of the nation for nothing. When the people of China hear western media portray this as an apology for government errors they find this play quite difficult to imagine/understand. The hard/cold decision was to contain the spread locally from the first and therefore those local people had to suffer more hardships without volunteering. The least the nation could do is to appreciate this.
  4. Fangcang – makeshift hospitals are effective
    The establishment of the fangcang (makeshift hospitals using stadiums and exhibition centres) seemed strange, given that you were assembling ‘suspected cases’ all in one single space. The models predicted success, which was borne out by reality. This has to do with how you want to deal with suspected cases and confirmed cases with light symptoms. It was determined that these people are better assembled together under professional care and control than remaining at home to self-isolate with family. Fangcang-induced infections turned out to be negligible, almost zero. With beds a few metres from each other and everyone breathing the same air, how was this possible? The answers are in the obligatory wearing of masks, on-hand medical and professional help and admin and enforced discipline, and almost continuous cleaning of the environment. These put together turned out to be vastly preferable, so far as the numbers are concerned, to home isolation where people do it in any amateur manner they like/can.
  5. Testing methods with replication are crucial (real engineers can appreciate the use of time redundancy and diversity)
    The testing method adopted has practically 100% accuracy in the lab, close enough to 100% to be dependable for a tested population where the infection rate is only 1%, but in the field negative results were not trust-worthy (positives are completely fine). This was also put into the models and the resulting standard changes converted a large number of suspects to confirmed in a single day (all such converted cases had negative test results, but did not pass a CT scan test). The scientists read the UK’s confident reporting of how many tested with a large proportion of negatives with fascination, and speculate that the UK may have a more reliable testing procedure. This testing situation also inspired the fangcang approach as well as the very tight lockdown measures taken across the country. You don’t get cleared just because you had a negative. You need 2-3 negatives in a row without symptoms. In other words, treat everyone as a suspect case and everyone with symptoms as a confirmed case and design your control measures based on this assumption. The CCP is able to do this, other countries maybe not.
  6. Modelling approaches, also diverse and competing, are a must.
    The modelling gravitated towards two competing camps, by design of the government organizers. One is called the maths model and the other the medicine model. The first is led by system theorists and the second by epidemiologists. The commonly seen model of a first-order differential equation with an R0 factor is nowhere to be seen in either group of models actually consulted by the decision makers – they are much more sophisticated than that. The maths model consistently returned more accurate predictions, with a worst-case error on death numbers below 7% at all stages – this is the only hard number my friend was willing to disclose. All published models that have appeared, whether from within or outside China, have been comparatively checked with the decision models and found to be inferior, usually by a lot.
  7. Future of the models?
    There is very little chance of seeing these decision models published, not any time soon. My friend’s words: “We should not publish when there is an atmosphere in which such a publication might result in extra-science interpretations and uses” and such an atmosphere will linger for a long time, by the looks of it. I read the CCP propaganda as well as the stuff coming out of our government and can see this stuff buried deep for long. However the modellers continue to work on data from the wider world now and the government continues to listen to them. One difference between China and much of the rest of the world is that the scientists cannot just tell the government the science says this and that without providing evidence, as the members of the government can understand scientific evidence at an academic level. And they organize multiple teams to work against each other to form a peer-review like environment from the start.
  8. Protection of medics is a key factor
    The most important issue, highlighted by the models and tested in real life, is the protection of the medics. Initially the disaster was when Wuhan people crowded general-purpose hospitals where the medics were not protected. When the external teams went to Wuhan+Hubei they were well prepared and formed special-purpose facilities which had a far greater success rate with next to zero infection of medics. Although this is intuitive, the actual numerical difference made in the death numbers was unintuitively large.
  9. Ventilators are a last resort, used when only a 20% chance of survival is left.
    One of the little-publicized facts is that the starting and ending procedures of ventilator use on a patient (putting them on/off the machine) represent the single worst point for medic infections. This has caused a reluctance in China to use ventilators, and the threshold for their use is set quite high, leading to ventilated patients having only a 20% rate of survival – if you are not already dying you are not ventilated. So they are a bit fascinated by the current western thing about seeing ventilators as some sort of almighty saviour, esp. given the current suboptimal PPE state for medics in an environment of retired medics (presumably not young) re-joining service.
  10. Masks, hand washing – NOT to be neglected 
    On how to protect ourselves, my friend emphasizes mask wearing and hand washing – diligent mask wearing and hand washing mimic the fangcang regime to some degree. Contrary to common belief, the wearing of even three-ply surgical masks protects not only the environment from the wearer but also the wearer from the environment, and N95 masks are indeed better. He became a bit rhetorical and urged us to disregard imagined stigmatization to prioritize life, both our own and that of those who may stigmatize us.

A 12-day battle with COVID-19 of my colleague – in his mid-40s and fit.

My close colleague Professor Patrick Degenaar

https://www.ncl.ac.uk/engineering/staff/profile/patrickdegenaar.html

has just sent his report. With his permission I am pasting it here.

“I’ve now basically recovered from what I believe (it’s impossible to get a test) to have been a COVID19 infection.

Just so you know what you have to look forward to in the future, I kept a brief symptoms diary:

Day 1:   Very slight ache in joints

Day 2:   Asymptomatic

Day 3:   Tired, lethargic, dizzy, and out of breath

Day 4:   Reduced symptoms compared to day 3. Started to assume it was getting better.

Day 5:   Morning felt almost fine. Then afternoon: Very tired, very out of breath, heart palpitations, Mild temperature = 37.5C

Day 6:   Reduced symptoms compared to day 5, but still very tired and dizzy. New symptom: a chest pain – like a claw embedded in the chest.

Day 7:   Similar to day 6, but also developed an occasional dry cough

Day 8:   Much worse – extremely tired, very out of breath. Climbing the stairs felt like climbing Everest. Feeling like very bad high-altitude sickness. A feeling of nausea (just like bad high-altitude sickness)

Day 9:   Similar to day 8

Day 10: Starting to get better similar to day 5

Day 11: Starting to feel much better. Can ascend stairs without getting out of breath. But still tired and dizzy.

Day 12: almost OK, but still need periodic Siestas

Stay safe!”