Your data won’t stay smart forever!

ReComp is about preserving the value of large-scale data analytics over time through selective re-computation.

  • Data analytics is expensive to perform on a large scale
  • It yields value, for example in the form of predictive models that provide actionable knowledge to decision makers
  • However, such value is liable to decay over time, following changes both in the underlying data used to produce these outcomes and in the processes themselves.

Deciding when such knowledge outcomes should be refreshed, following a sequence of data change events, requires problem-specific functions that quantify the value of each outcome and its decay over time, as well as models for estimating the cost of re-computation.
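As a rough illustration only, the kind of problem-specific functions involved might look like the following sketch (the names `Outcome`, `ValueModel`, `CostModel`, and their methods are hypothetical, not part of ReComp):

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Outcome:
    """A knowledge outcome produced by the underlying process P (hypothetical)."""
    outcome_id: str
    inputs: Sequence[str]  # identifiers of the data items it was derived from


class ValueModel(Protocol):
    """Problem-specific quantification of an outcome's value and its decay."""

    def value(self, outcome: Outcome) -> float:
        """Current value of the outcome, given the data changes observed so far."""
        ...

    def expected_value_after_refresh(self, outcome: Outcome) -> float:
        """Estimated value if the outcome were re-computed on the latest data."""
        ...


class CostModel(Protocol):
    """Estimates the cost (time, money, cloud resources) of re-computing an outcome."""

    def estimate_refresh_cost(self, outcome: Outcome) -> float:
        ...
```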

We envision a decision support system, which we call ReComp, that incorporates these functions and can use them to make informed re-computation decisions in reaction to any of these changes.

ReComp takes the form of a meta-process that controls an underlying resource-intensive process, P:

[Figure: The ReComp meta-process (the ReComp loop)]

ReComp aims to help decision makers react to changes in data by allocating a re-computation budget (cloud resources, time, money) in a way that optimises the use of that budget vis-à-vis the expected increase in the value of the outcomes chosen for refresh.
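One minimal way to read this, purely as a sketch and not ReComp's actual algorithm, is a greedy selection that refreshes the outcomes with the best estimated value gain per unit of cost until the budget is exhausted. It reuses the hypothetical `Outcome`, `ValueModel`, and `CostModel` interfaces sketched above:

```python
def select_outcomes_to_refresh(
    outcomes: list[Outcome],
    value_model: ValueModel,
    cost_model: CostModel,
    budget: float,
) -> list[Outcome]:
    """Greedy sketch: pick outcomes by estimated value gain per unit of cost."""
    scored = []
    for o in outcomes:
        gain = value_model.expected_value_after_refresh(o) - value_model.value(o)
        cost = cost_model.estimate_refresh_cost(o)
        if gain > 0 and cost > 0:
            scored.append((gain / cost, cost, o))

    # Highest gain-per-cost first; stop adding outcomes once the budget runs out.
    scored.sort(key=lambda t: t[0], reverse=True)
    selected, spent = [], 0.0
    for _, cost, o in scored:
        if spent + cost <= budget:
            selected.append(o)
            spent += cost
    return selected
```

In practice the decision would also weigh which data change events affect which outcomes, but the gain-per-cost ranking captures the basic trade-off between refresh cost and expected value.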

What makes ReComp challenging to realise is the ambition to make it both generic and customisable.

We are going to validate our approach on two very different case studies:

  • Genetic diagnostics through NGS data processing
  • Newcastle’s Urban Observatory: predictive models from smart city data obtained from multiple, diverse sensors, specifically to study the Urban Heat Island effect in large cities

Read more:  [Forwards and Backwards ReComp]  [The ReComp vision]