Nasuni today made generally available Nasuni Edge for Amazon S3, a storage offering for edge computing environments that integrates with the Amazon Simple Storage Service (S3) to enable organizations to store data both locally and in the cloud.

Lance Shaw, director of product marketing for Nasuni, said Nasuni Edge for Amazon S3 makes it possible to store files locally at the point where data is processed and analyzed, while taking advantage of an object-based cloud service to centrally aggregate data from multiple edge computing platforms through a global namespace based on an Amazon S3 bucket. That capability makes cloud storage services available to an existing application without having to refactor it, noted Shaw.

That approach also provides a method for backing up and recovering data in the event of an outage, or a ransomware attack that renders local data unavailable until demands are met, he added.

Nasuni Edge for Amazon S3 gives IT teams the option of enabling applications to read and write data using the Amazon S3 API to access AWS Local Zones, AWS Outposts and on-premises environments. Local caching capabilities, meanwhile, provide LAN-like performance to applications running at the edge using Server Message Block (SMB) or Network File System (NFS) protocols.

File management can also be extended based on the number of tags, the number of characters in tags and the size of file metadata used to classify data. That capability will be increasingly crucial as more applications infused with artificial intelligence (AI) models are deployed at the network edge, noted Shaw.
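For context on those tag-based limits, Amazon S3 itself caps object tags at 10 per object, with keys up to 128 characters and values up to 256 characters. A small validator like the hypothetical one below can catch a classification scheme that would exceed S3's documented limits before objects are uploaded.

```python
# Hypothetical helper: check a classification tag set against the
# documented Amazon S3 object-tagging limits (10 tags per object,
# keys up to 128 characters, values up to 256 characters).
def validate_s3_tags(tags: dict[str, str]) -> list[str]:
    """Return a list of limit violations; an empty list means the tags fit."""
    errors = []
    if len(tags) > 10:
        errors.append(f"too many tags: {len(tags)} (S3 allows 10 per object)")
    for key, value in tags.items():
        if len(key) > 128:
            errors.append(f"tag key too long ({len(key)} > 128 characters)")
        if len(value) > 256:
            errors.append(f"tag value too long for key '{key}' ({len(value)} > 256)")
    return errors
```

For example, `validate_s3_tags({"department": "robotics", "sensitivity": "internal"})` returns an empty list, while a scheme with eleven tags would be flagged.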

As the number of applications deployed at the edge steadily increases, so does the amount of data that needs to be analyzed. The aggregate results of data processed locally typically need to be shared with an analytics application running in the cloud or in an on-premises data center. Nasuni Edge for Amazon S3 provides IT teams with a storage architecture that makes it simpler to achieve that goal, said Shaw.

Of course, it’s not always clear which team is managing edge computing environments. In many instances, operational technology (OT) teams are responsible for applications running, for example, on a factory floor. The one thing that is certain is that as more data is collected and processed at the edge, there is a greater need to ultimately share data with a range of applications managed by an IT team. The challenge is finding a way to facilitate that sharing while disrupting existing OT workflows as little as possible.

There may soon come a day when more data is processed at the network edge than in the cloud, as organizations increasingly bring computing to where data resides rather than always moving raw data over a wide area network (WAN) to process it in the cloud. Eventually, however, all the data processed and analyzed at the edge will need to move elsewhere simply to make room for the data that is continuously being generated, especially as edge computing platforms, with each iteration, become as robust as any server residing in a data center.

Image source: Photo by Francesco Ungaro on Unsplash