Video is already the world's biggest consumer of cloud storage, compute and bandwidth, so anything that can curb its gargantuan appetite for resources could save a lot of time and, therefore, a lot of money.

Enter Fujitsu with a new compression algorithm that, it says, can reduce the size of a video by 90%; the not-insignificant caveat is that the resulting video can only be interpreted by AI (artificial intelligence), because of the level of degradation.

The key aspect of the new technology, developed by scientists at the Japanese firm, is that it automatically identifies the areas within an image that AI prioritizes and compresses the data to the minimum size that the AI can still recognize.
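Fujitsu has not published implementation details, but the idea of keeping AI-prioritized regions intact while aggressively degrading everything else can be sketched as follows. This is a minimal illustration, assuming a hypothetical per-pixel `saliency` map (such as an AI attention heatmap) and simple bit-depth quantization as the stand-in for real compression:

```python
import numpy as np

def compress_by_saliency(frame: np.ndarray, saliency: np.ndarray,
                         threshold: float = 0.5, coarse_bits: int = 2) -> np.ndarray:
    """Keep salient pixels at full 8-bit depth; quantize the rest.

    `saliency` is a hypothetical map in [0, 1] marking regions the
    downstream AI model relies on. Non-salient pixels are rounded down
    to `coarse_bits` of precision, so they compress far better later.
    """
    mask = saliency > threshold
    step = 256 // (2 ** coarse_bits)   # e.g. coarse_bits=2 -> 4 gray levels
    out = frame.copy()
    out[~mask] = (out[~mask] // step) * step
    return out
```

A real codec would operate on transform coefficients rather than raw pixels, but the principle is the same: spend bits only where the recognition model needs them.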

Great for the cloud

This, Fujitsu added, “will allow users to analyze more advanced video data by combining multiple video data stored in the cloud, sensor data, and performance data such as sales data”, all without any increased data transmission demands.

The rise of ultra-high-resolution smartphone cameras (the Samsung Galaxy S20 Ultra has a 108-megapixel sensor) and 4K CCTV security cameras makes such technologies unavoidable.

In practice, the compression would be done at the edge, on the device itself, using a specialist chip, with the recognition done in the cloud, and the two joined in a continuous feedback loop.
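That feedback loop can be sketched in a few lines. This is an assumption-laden illustration, not Fujitsu's design: `compress` and `recognize` are hypothetical stand-ins for the edge chip and the cloud model, and the loop simply relaxes compression when recognition confidence drops:

```python
def edge_cloud_loop(frames, compress, recognize, min_confidence=0.8):
    """Hypothetical edge/cloud feedback loop.

    The edge compresses each frame at the current aggressiveness `level`,
    the cloud model returns a (label, confidence) pair, and the edge
    adjusts `level` so the video stays just recognizable to the AI.
    """
    level = 4          # start aggressive; lower = more bits kept
    results = []
    for frame in frames:
        packet = compress(frame, level)        # edge: specialist chip
        label, confidence = recognize(packet)  # cloud: AI recognition
        results.append(label)
        # Feedback: back off compression when the AI struggles,
        # tighten it again when recognition is comfortably above target.
        if confidence < min_confidence and level > 1:
            level -= 1
        elif confidence > min_confidence and level < 8:
            level += 1
    return results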

Fujitsu plans to commercialize this technology for third parties by the end of fiscal 2020, and to introduce it into a variety of applications across different industries.