By now, we’re all well used to cloud computing, and recognise the many ways in which cloud computing benefits businesses and makes our everyday lives easier. Could edge computing have a similar impact?
What is edge computing?
Edge computing refers to the processing of data on devices such as smartphones. Unlike in cloud computing, where data is processed in remote data centres, edge computing gives devices the ability to carry out some or all of the data processing locally, at the point where the data is collected.
This is all possible because devices are getting more and more powerful (in part thanks to AI), meaning they can handle more data processing tasks. In other words, the device no longer has to send every little piece of data – whether it’s useful or not – to the cloud.
Think of all the data an office security camera gathers in the course of a night. Hours and hours of footage, with the vast majority of it showing empty corridors and rooms. Sending all of that footage to the cloud, when most of it probably has little or no value, is a waste of bandwidth. But an AI-equipped security camera, able to analyse images on the spot, could detect unusual activity and prioritise that footage.
Key benefits of edge computing
Let’s look at the biggest advantages edge computing brings:
1. Saving bandwidth
The proliferation of smart devices means we’re creating an extraordinary amount of data. But not all of that data is critical. Revisiting our security camera example, if you have multiple cameras on a site, and each one is constantly streaming data to the cloud, that’s a lot of bandwidth spent on data of little value. But if the cameras are intelligent enough to process the data at its source, they can stream only the most important footage to the cloud, while discarding the rest.
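To make the idea concrete, here’s a minimal sketch of that kind of edge-side filtering, assuming frames arrive as simple 2D grids of brightness values. The function names (`frame_changed`, `select_for_upload`) and thresholds are illustrative, not any real camera API:

```python
# Hypothetical sketch: keep only frames that show activity,
# so the camera streams far less data to the cloud.

def frame_changed(prev, curr, threshold=10):
    """Return True if enough pixels differ between two frames."""
    diffs = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > 5  # per-pixel brightness change worth counting
    )
    return diffs >= threshold

def select_for_upload(frames):
    """Compare consecutive frames; keep only those showing change."""
    kept = []
    prev = frames[0]
    for curr in frames[1:]:
        if frame_changed(prev, curr):
            kept.append(curr)  # worth streaming to the cloud
        prev = curr
    return kept
```

A real camera would run a trained vision model rather than a pixel-difference check, but the principle is the same: the decision about what is worth uploading happens on the device, not in the data centre.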
2. Reducing latency
Another advantage of devices being able to sort critical data from the not-so-critical data is a reduction in latency (i.e., the time it takes to send data and receive a reply). With cloud computing, the device may be sending information to a data centre on the other side of the world for processing, and this often results in a brief delay. This doesn’t always matter; for example, most of us don’t mind that it typically takes Alexa a few seconds to reply to our question about today’s weather.
But that lag time is far less acceptable in the context of, say, a self-driving vehicle out on the road. If another car runs a stop sign, do you really want your autonomous vehicle to have to send that sensor and visual data to the cloud, then wait for a decision on what to do next? Not so much. With edge computing, critical data – data that’s absolutely vital to real-time decisions – can be processed on the spot, resulting in faster decisions: the closer the processing is to the source of the data, the quicker the response time. Meanwhile, data that’s not so time-critical (for example, fuel performance data) can be sent to the cloud for later analysis.
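The split described above amounts to a simple routing decision on the device. This is a hedged sketch, assuming each sensor reading carries a flag saying whether it affects real-time driving decisions; `handle_locally` and `cloud_queue` are illustrative stand-ins, not a real vehicle API:

```python
# Hypothetical sketch: time-critical readings are handled on the
# device; everything else is queued for later upload to the cloud.

from queue import Queue

cloud_queue = Queue()  # non-urgent data, uploaded later in batches

def handle_locally(reading):
    """Placeholder for on-device inference (e.g. obstacle response)."""
    return f"acted on {reading['type']} immediately"

def route(reading):
    if reading["time_critical"]:
        return handle_locally(reading)  # decide on the spot, no round trip
    cloud_queue.put(reading)            # defer to the cloud
    return "queued for later analysis"
```

The point of the sketch is the shape of the decision, not the details: latency is saved because the round trip to a data centre is skipped entirely for the readings that matter most.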
3. Enhancing security and privacy
Edge computing reduces the amount of data that has to travel over a network, which is an obvious bonus from a security perspective. There’s also the fact that data is distributed (in this case, located on multiple user devices) as opposed to being stored in one place. This is all good news, providing manufacturers of smart products make securing that local data a key priority.
What about privacy? In theory, with less data being uploaded to the cloud and more data being processed on the device, users of smart devices will have greater control over their data. If your Amazon Echo speaker is able to process and respond to your weather forecast request without that data being sent to a central Amazon server, then that’s one less bit of data the company has about you. That’s the idea, anyway. In reality, companies are unlikely to give up their vice-like grip on something as valuable as user data. But as edge computing evolves, we may (if we’re lucky) see more options for opting out of sending our data to the cloud.
A potential pitfall to be aware of
That’s the positives taken care of. What about the negatives? To my mind, there’s one potential downside to edge computing: namely, that important data could end up being overlooked and discarded in the quest to save bandwidth and reduce latency.
Data that isn’t vital for real-time decisions may have other uses. For example, if an autonomous vehicle is travelling along an otherwise empty road, it may seem like that visual and sensor data is pointless. What can be learnt from an empty road? Quite a lot, potentially. That seemingly useless data could still provide information on road conditions and how the vehicle behaves under those conditions – and this could help guide other autonomous vehicles travelling the same route in the future. A balance is needed between maximising the opportunities provided by edge computing and recognising the ongoing value of data.