Among the many forces for change in the automotive industry is a growing desire to ‘digitise’ business – ensuring that every aspect of an enterprise’s operation is represented by data fresh enough to be acted on with immediacy and confidence.
In doing this, a business has the opportunity to take on the attributes of data- and software-based industry sectors – fast iteration, fast time-to-market, accelerated innovation delivery – while retaining and enhancing its domain knowledge and competence in its market sector, and taking advantage of new revenue streams created at the boundary of the physical and digital worlds.
This is a journey we see frequently, and one along which we are helping our strategic customers and partners. It starts with mastering your data, then understanding how to exploit that data to extract insight and value, before rewiring the business to take advantage of the optimisation this affords – reacting swiftly and affordably while containing risk. This is why digitisation is combined with business change in the overused term ‘digital transformation’. Overused or not, the practice is real, has been achieved by some enterprises, and will be necessary to adapt to shifts in an industry shaped by an increasingly connected world and driven by the need to adopt sustainable models for transportation.
However, many actors in the industry are still stuck on the first step, so the process is worth closer examination. I would suggest there are four aspects to it.
Making data collectable is the first step. The obvious challenge is sensing the environment and getting everything connected in some form. There are also more subtle and entrenched issues to deal with, such as ensuring that you have rights and access to data which you believe you own, or at least have an established right to curate. There are many cases from the world of software packages with opaque data, software-as-a-service offerings, and ‘shared consumer internet’ solutions where those rights turn out to be weaker than one might think.
The second challenge concerns collecting data with the most appropriate fidelity, considering aspects such as resolution, sample rates, which events are captured, and what aggregation and filtering can be done along the process chain to improve efficiency without losing meaning. The goal in most cases is the best fidelity that remains affordable and rational.
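As a hypothetical illustration of this trade-off, the sketch below reduces a high-rate sensor stream to per-window summaries. Keeping the minimum and maximum alongside the mean is one way to cut volume sharply while not averaging away the extremes that often carry the meaning; the function name and window size are illustrative, not a prescribed design.

```python
from statistics import mean

def aggregate_windows(samples, window_size):
    """Summarise a high-rate stream into per-window statistics.

    Retains min, max, mean and count per window, so spikes are
    preserved even though raw volume drops by ~window_size times.
    """
    summaries = []
    for start in range(0, len(samples), window_size):
        window = samples[start:start + window_size]
        summaries.append({
            "min": min(window),
            "max": max(window),
            "mean": mean(window),
            "count": len(window),
        })
    return summaries

# e.g. 1,000 raw temperature readings reduced to 4 summaries
readings = [20.0 + (i % 7) * 0.1 for i in range(1000)]
print(aggregate_windows(readings, 250))
```

The same idea scales down to the vehicle itself: aggregating at the edge before transmission is often where the ‘affordable and rational’ boundary sits.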
The third task is to make data coherent and usable, which means ensuring that data is fully described with appropriate metadata irrespective of its source. If I don’t know whether a temperature reading is in Celsius or Fahrenheit, that’s a problem. But if I am measuring airflow temperature and one sensor sits in relatively free air while another sits next to a heat source, I also need to know that, so that I can correct for it and recover usable data.
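A minimal sketch of what ‘fully described’ can mean in practice: each reading carries its unit and placement as metadata, so downstream code can normalise units and knows which readings need compensation. The field names here are assumptions for illustration, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class TemperatureReading:
    value: float
    unit: str        # "C" or "F" -- without this, the value is ambiguous
    sensor_id: str
    placement: str   # e.g. "free_air" or "near_heat_source"

    def in_celsius(self) -> float:
        """Normalise to Celsius regardless of the source unit."""
        if self.unit == "C":
            return self.value
        return (self.value - 32.0) * 5.0 / 9.0

readings = [
    TemperatureReading(21.5, "C", "airflow_1", "free_air"),
    TemperatureReading(95.0, "F", "airflow_2", "near_heat_source"),
]
for r in readings:
    print(r.sensor_id, round(r.in_celsius(), 1), r.placement)
```

With placement recorded, a model can discount or correct the `near_heat_source` reading rather than silently blending incomparable values.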
The fourth area is risk. Most entities understand today that collecting data is not risk-free, with GDPR and similar regulations requiring thought to be given to data that contains personally identifiable information, and also to data that can be combined to allow behaviour or identity to be inferred. There is also the issue of data ownership versus curation. Just because you are collecting data from an asset you operate, manufacture or own does not make it your data, and you may be assuming rights over it that you will ultimately be found not to have.
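One common mitigation for the PII risk described above – offered here as an illustrative sketch, not as the author’s prescribed approach – is to pseudonymise direct identifiers with a keyed hash before the data leaves your control. Using an HMAC rather than a plain hash means the mapping cannot be rebuilt by anyone who lacks the key; the key name and record fields below are hypothetical.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier (e.g. a VIN) with a keyed hash.

    The result is stable for a given key, so records can still be
    joined, but rotating or destroying the key severs the link to
    the original identity.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"example-key-held-in-a-secrets-store"
record = {"vin": "1HGCM82633A004352", "speed_kph": 87}
record["vin"] = pseudonymise(record["vin"], key)
print(record)
```

Note that pseudonymised data can still be personal data under GDPR if re-identification remains possible, so this reduces risk rather than eliminating it.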
The ownership and governance of data will be worked out over the next several years, and many commentators believe that rights are likely to move towards the end user or the owner, whichever is more appropriate to the business model in question. This should lead to a more equitable and open environment for valuing data realistically, and enable widespread data sharing and exchange powered by net-equivalent microtransactions. For now, the best advice is to maintain flexibility and security, keep meaningful metadata in your data stores to address future claims, and ensure that privacy is at the heart of how you manage data.
What happens when you do all this, and start to put it to use? At Microsoft, we like to talk about the ‘digital feedback loop’. The concept is that when significant amounts of appropriately managed data are collected across all aspects of an enterprise, that data can not only be used to optimise individual parts of the company’s operation locally, but also combined to increase insight into performance against the mission and goals of the entire business – and even to model what form that business should take in the future.
More, higher-quality data is going to be collected, curated, evaluated and exchanged over the next few years. Organisations’ choices in digital infrastructure need to take that into account: the ability to manage, transform, secure and extract value and insight from data at scale and at speed, as close to real time as possible, while protecting privacy. Solutions such as the connected vehicle enable this trend; solutions such as the connected customer, factory and supply chain demand it; and solutions such as autonomous driving and advanced driver assistance systems are expanding it, as even more data must be collected at the vehicle just to make these capabilities possible. The potential for reusing this data through recombination and analysis by and for multiple stakeholders, especially in an urban environment, is almost immeasurable.
This is the first in a series of articles by John Stenlake, automotive lead for EMEA at Microsoft
This article was originally published in the Spring 2020 issue of The Record.