A new EPSRC grant in the field of Async Circuits + Tsetlin Machines awarded!

I am happy to announce that I have been awarded a new EPSRC grant – technically, it is a UKRI & RCN (Research Council of Norway) project – UKRI-RCN: Exploiting the dynamics of self-timed machine learning hardware (ESTEEM).

I am very excited to work on it with my two Newcastle colleagues, Rishad Shafik and Domenico Balsamo, and our University of Agder colleague Ole-Christoffer Granmo, in close collaboration with PragmatIC, Mignon and CFT.

More details on this project can be found here.

My Keynote at DESSERT 2023

I had the pleasure of presenting a keynote talk at the 13th International Conference on Dependable Systems, Services and Technologies (DESSERT 2023), held in Athens, Greece, on October 13-15, 2023, in hybrid mode.

The talk’s topic was “Tsetlin Machines: stepping towards energy-efficient, explainable and dependable AI”: https://www.dessert-conf.org/dessert-2023/alex-yakovlev/

The PDF of the slides can be found here.

Talking at the AI Workshop held at the Center for AI Research (CAIR) at the University of Agder, Norway

I was invited to the University of Agder, in the south of Norway (in a nice town called Grimstad, famous for the presence of Henrik Ibsen and Knut Hamsun), to present my vision of what kind of hardware we need for pervasive AI. This presentation was part of a workshop organised by Prof Ole-Christoffer Granmo, Director of CAIR, on the occasion of the grand opening of CAIR – https://cair.uia.no

In my presentation I emphasized the following points:

  • Pervasive Intelligence requires reconsidering many balances:
    – Between software and hardware
    – Between power and compute
    – Between analog and digital
    – Between design and fabrication and maintenance
  • Granulation phenomenon: Granularity of power, time, data and function
  • Main research questions:
    – Can we granulate intelligence down to a minimum?
    – What is the smallest level at which we can make cyber-systems learn in terms of power, time, data and function?
  • Grand challenge for pervasive hardware AI:
    To enable electronic components with an ability to learn and compute in real-life environments, with real power and in real time
  • Research Hypothesis:
    We should design systems that are energy-modulated and self-timed, with maximally distributed learning capabilities

I put forward a strong hypothesis about the role of Tsetlin Automata (Automata with Linear Tactics) in building electronics with high-granularity learning capabilities.

The key elements of the proposed approach are:

  • Event-driven, robust to power and timing fluctuations
  • Decentralised Tsetlin Automata (TAs) for learning on demand (see the sketch after this list)
  • Mixed digital-analog compute where elements are enabled and controlled by individual TAs
  • Naturally approximate, both in learning and compute
  • Asynchronous logic for h/w implementation
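
To make this concrete, here is a minimal sketch of a single two-action Tsetlin Automaton in Python – my own toy illustration, not code from any of the projects mentioned here. The automaton wanders between 2n states: a reward pushes it deeper into the half corresponding to its current action, while a penalty pushes it towards, and eventually across, the boundary.

    import random

    class TsetlinAutomaton:
        """Two-action Tsetlin Automaton with 2*n states.
        States 1..n select action 0; states n+1..2n select action 1."""

        def __init__(self, n=6):
            self.n = n
            self.state = random.choice([n, n + 1])  # start at the boundary

        def action(self):
            return 0 if self.state <= self.n else 1

        def reward(self):  # reinforce the current action
            if self.action() == 0:
                self.state = max(1, self.state - 1)
            else:
                self.state = min(2 * self.n, self.state + 1)

        def penalize(self):  # drift towards the other action
            self.state += 1 if self.action() == 0 else -1

    # Toy environment: action 1 is the right choice 90% of the time.
    ta = TsetlinAutomaton()
    for _ in range(100):
        if (ta.action() == 1) == (random.random() < 0.9):
            ta.reward()
        else:
            ta.penalize()
    print("learned action:", ta.action())  # almost always 1

The point is how little state is needed – a saturating counter and two update rules – which is exactly what makes TAs attractive for fine-grain learning in hardware.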

The full set of my slides is here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/AlexYakovlev-AI%20Hardware-070219.version3.pdf

My Talk at the RAEng Fellows Day at Newcastle

I was invited to give a talk on my research at the Royal Academy of Engineering event held in Newcastle on 28th January 2019.

The title of the talk was “Asynchronous Design Research, or Building Little Clockless Universes”.

The PDF of the slides of my talk is here: http://async.org.uk/presentations/AlexYakovlev-Research-RAEngEvent-280119.pdf

I only had 15 minutes given to me. Not a lot to cover 40 years of research life. So, at some point in preparing this talk, I decided that I would try to explain what research in microelectronic systems design is about, and in particular how my research in asynchronous design helps it.

Basically, I tried to emphasize the role of ‘time control’ in designing ‘little universes’, where the time span covered by our knowledge of what is going on in those systems, and why, stretches from a few picoseconds (a transistor switching event) to hours if not days (application lifetimes). So we cover around 10^18 events. How does that compare to the life of the Universe – being “only” around 10^10 years, i.e. some 4×10^17 seconds? Are we as powerful as gods in creating our ‘little universes’?
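
For the curious, the arithmetic behind that claim (my own back-of-the-envelope check in Python):

    ps = 1e-12           # a transistor switching event, ~1 picosecond
    day = 24 * 3600.0    # an application lifetime, ~1e5 seconds
    print(day / ps)      # ~8.6e16, i.e. 10^17..10^18 time quanta

    # The age of the Universe, for comparison, in seconds:
    print(14e9 * 365 * day)   # ~4.4e17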

So, in my research I want to better control TIME at the smallest possible scale – surprisingly, by going CLOCK-LESS! Clocking creates an illusory notion of determinacy in tracking events and their causal relationships. Actually, it obscures such information. Instead, by doing your circuit design in a disciplined way, such as speed-independent circuit design, you can control the timing of events down to the finest levels of granularity. In my research I achieved that level of granularity for TIME. It took me some 40 years!
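
To give a flavour of that discipline, here is a toy model of the Muller C-element, the classic speed-independent primitive (my own illustration, not from the talk): its output switches only when both inputs agree and holds its value otherwise, so every output event is unambiguously caused by input events – no clock involved.

    def c_element(a, b, y_prev):
        """Muller C-element: y goes high when a = b = 1, low when
        a = b = 0, and otherwise holds its previous value."""
        return a if a == b else y_prev

    y = 0
    for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
        y = c_element(a, b, y)
        print(a, b, "->", y)   # 0, 1, 1, 0: output waits for both inputs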

But, furthermore, more recently, say in the last 10 years, I have managed to learn pretty well how to manage power and energy down to that smallest possible level too, and actually to make sure that energy consumption is known to the level of events controlled in a causal way. Energy/power-modulated computing, and its particular form, power-proportional computing, is the way to do that. We can really keep track of where energy goes down to the level of a few femtojoules. Indeed, if the parasitic capacitance of an inverter output in a modern CMOS technology is around 10fF and we switch it at Vdd=1V, we are talking about a minimum energy quantum of CV^2 = 10fJ = 10^-14 J per charging/discharging cycle (0-1-0 in terms of logic levels). Mobile phones run applications that can consume energy at the level of 10^4 J. Again, as with time, we seem to be pretty well informed about what is going on in terms of energy, covering some 10^18 events! I will probably just need another 5 or so years to conquer determinacy in energy and power terms – our work on Real-Power Computing is a step in this direction.
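
Again, the arithmetic (mine, using the figures above):

    C, Vdd = 10e-15, 1.0           # 10 fF inverter output, 1 V supply
    quantum = C * Vdd ** 2         # energy per 0-1-0 cycle
    print(quantum)                 # 1e-14 J, i.e. 10 fJ
    print(1e4 / quantum)           # a 10^4 J phone budget = 10^18 events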

Now, what’s next, you might ask? What other granularization, distribution and decentralization can we conquer in building little universes!? The immediate guess that comes to my mind is the distribution (in the time and energy directions) of functionality – and, to be more precise, of intelligence. Can we create granules of intelligence at the smallest possible scale, covering the same orders of magnitude? It is a hard task. Certainly, for CMOS technology it would be really difficult to imagine that we could force something like a small collection of transistors to dynamically learn and optimize its functionality. But there are ways of getting pretty close to that. One of them seems to be the direction of learning automata. Read about Tsetlin automata, for example (https://en.wikipedia.org/wiki/Tsetlin_machine), in the recent work of Ole-Christoffer Granmo.


Asynchronous drive from Analog

Run smarter – Live longer!

Breathe smarter – Live longer!

Tick smarter – Live longer!

I could continue listing these slogans for designing better electronics for the era of trillions of devices and the peta-, exa- and zettabits of information produced on our small planet.

Ultimately, it is about how good we are at TIMING our ingestion and processing of information. TIMING has been and will always be a key design factor, one that determines other factors such as the performance, accuracy and energy efficiency of the system, and even the productivity of design processes.

As computing spreads into the periphery, i.e. as it goes into ordinary objects and fills the forms of those objects like water fills the shape of a cup, it is only natural to think that computing at the periphery, or edge, should be determined more by the nature of the environment than by the rules of computer design that dominated the bygone era of compute-centrism. Computing has for ages been quite selfish and tyrannical. Its agenda has been set by scaling the size of semiconductor devices and by the growing complexity of the digital part. This scaling process had two important features. One was increasing speed and power consumption, which has led to an ongoing growth in data server capacity. The other was that the only way to manage the complexity of the digital circuitry was to use a clock in the design, to avoid potential race conditions in circuits. As computing reaches the periphery, it does not need to become as complex and clocky as those compute-centric digital monsters. Computing has to be much more environment-friendly. It has to be amenable to the conditions and needs of the environment – otherwise it simply won’t survive!

But the TIMING factor will remain! What will then drive this factor? It certainly won’t only be the scaling of devices and the drive for higher throughput by means of a clock. Why? For example, because we will not be able to provide enough power for that high throughput: there isn’t enough lithium on the planet to make so many batteries, nor do we have enough engineers or technicians to keep replacing those batteries. On the other hand, we don’t need a clock to run the digital parts of those peripheral devices, because they will not be that complex. So, where will TIMING come from? One natural way of timing these devices is to extract TIMING directly from the environment – to be precise, from the ENERGY flows in the environment.
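
As a purely hypothetical sketch of what “timing out of energy” could look like (my illustration, not a description of any particular design): let a storage capacitor integrate whatever current the environment supplies, and trigger a computation step each time the stored charge crosses a threshold. The event rate then follows the energy flow.

    def events_from_energy(currents, dt=1e-3, c=100e-9, v_fire=1.0):
        """Yield firing times: capacitor c integrates the harvested
        current and fires (then resets) on reaching v_fire volts."""
        t, v = 0.0, 0.0
        for i in currents:
            v += i * dt / c          # dV = I*dt/C
            t += dt
            if v >= v_fire:
                v = 0.0
                yield t

    # A stronger energy flow gives a denser event stream - no clock anywhere.
    print(len(list(events_from_energy([1e-6] * 1000))))   # ~10 events/second
    print(len(list(events_from_energy([5e-6] * 1000))))   # ~50 events/second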

We have always used a power supply wire in our electronic circuits. Yes, but we have always used it as an always-ON servant, which had to be there to give us 5 Volts or 3 Volts, or more recently 1 Volt or even less, like 0.4 Volts (the so-called sub-threshold operation). That wire has never been treated as a signal carrying information value. Why? Because such information value was always in other signals, which would give us either data bits or clock ticks. Today it is time to reconsider this traditional thinking and widen our horizon by looking at the power supply signal as a useful information source. Asynchronous or self-timed circuits are fundamentally much more cognizant of the energy flow. Such circuits naturally tune their ticking to the power levels and run/breathe/tick smarter!

At Newcastle we have been placing asynchronous circuits at the edge with the environment, into analog electronics – in particular, into power regulation circuits, A-to-D converters and various sensors (voltage, capacitance, …). This approach allows us to significantly reduce the latencies and response times to important events in the analog domain, and to reduce the sizes of passives (caps and inductors); but perhaps most importantly, thanks to our asynchronous design tools under Workcraft (http://workcraft.org), we have made asynchronous design much more productive. Industrial engineers in the analog domain are falling in love with our tools.

More information can be found here:

http://async.org.uk

https://www.ncl.ac.uk/engineering/research/eee/microsystems/


Talk about Asynchronous Design for IoT at the ALIOT Workshop

The Erasmus+-funded project ALIOT, “Internet of Things: Emerging Curriculum for Industry and Human Applications” (http://aliot.eu.org), held its workshop in Newcastle on 9-11 July 2018.

I gave an invited talk on “Asynchronous Design for IoT”, in which I also showed, retrospectively, some of the history of developments in the field of asynchronous system design, where I have been involved for nearly 40 years, first in St Petersburg and then in Newcastle.

The slides of my talk can be found here: https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Asynchronous%20Design%20for%20IoT%20-AlexY%20-%20ALIOT2018.pdf


My keynote at Norwegian Nanoelectronics Network Workshop – 13 June 2018

I attended a highly stimulating networking workshop in Norway – called Nano-Network:

http://www.nano-network.net/workshop/

It was held in an idyllic place on an island called Tjome, south of Oslo.

Lots of excellent talks. Here is the programme:

http://www.nano-network.net/wp-content/uploads/2018/06/Workshop-programme-2018.pdf

and I gave my invited talk on “Bridging Asynchronous Circuits and Analog-Mixed Signal Design”. Here are the slides:

https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/talks/Nano-Micro-2018-Yakovlev-short-no-animation.pdf

The whole event came with an exciting social programme – a challenging adventure towards Verdens Ende (World’s End), with lots of tricky questions and tests on the way. Our team did well … but we weren’t the winners 🙁


IoT Technology Market 2017-2022 Prognosis

Quoting a recent Research and Markets listing:

“Internet of Things Technology Market by Node Component (Processor, Sensor, Connectivity IC, Memory Device, and Logic Device), Network Infrastructure, Software Solution, Platform, Service, End-use Application and Geography – Global Forecast to 2022”

https://www.researchandmarkets.com/research/hld477/internet_of

“IoT technology market expected to grow at a CAGR of 25.1% during the forecast period”

“The IoT technology market is expected to be valued at USD 639.74 billion by 2022, growing at a CAGR of 25.1% from 2017 to 2022. The growth of the IoT technology market can be attributed to the growing market of connected devices and increasing investments in the IoT industry. However, the lack of common communication protocols and communication standards across platforms, and high-power consumption by connected devices are hindering the growth of the IoT technology market.”
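
For what it is worth, a quick sanity check of what a 25.1% CAGR implies (my own arithmetic, not from the report): working back from the 2022 forecast gives the implied 2017 base.

    cagr, end_2022 = 0.251, 639.74            # USD billion, from the quote
    start_2017 = end_2022 / (1 + cagr) ** 5   # five years of compound growth
    print(round(start_2017, 1))               # ~208.8 USD billion in 2017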