By Dan Sheridan
Microsoft’s Azure cloud platform may be approaching the 10th anniversary of its February 2010 launch, but as an ever-evolving platform that has helped reshape the way businesses manage data, it was always going to generate a constant stream of new questions.
While Azure Storage cuts to the chase when it comes to state-of-the-art data solutions, there are myriad factors to consider when broaching the subject of modern services such as this.
Across almost every sector, companies, organizations and authorities of all shapes and sizes are benefitting from Azure’s flexibility in relation to analytics, networking, virtual computing and much, much more.
The Azure armory was bolstered by the introduction of its storage solutions, and terms such as Blob Storage have more recently become part of the tech vernacular among the service’s global user community.
In Microsoft’s own words: “Azure Storage is Microsoft’s cloud storage solution for modern data storage scenarios,” and through Azure Storage Explorer – a GUI-based tool – that data can now be accessed more quickly and easily.
If this sounds like it could be of invaluable use to your business but more than a little daunting, or you’re an existing user with a list of queries as long as your arm, the following may be of some use as we look at a range of common questions surrounding Azure Storage.
Microsoft Azure Storage is made up of four types of data services, namely Azure Blobs, Files, Queues, and Tables. Each is accessible via a previously created storage account and offers a platform for a particular kind of data. Let’s run through them.
Optimized for storing huge amounts of unstructured data, Blob Storage is set up to deliver a host of tasks, including storing files for distributed access, streaming audio and video, and storing data for backup and restore as well as for analysis by on-premises or Azure-hosted services.
Data can be accessed by user or client applications from anywhere in the world through HTTP/HTTPS, and it also supports Azure Data Lake Storage Gen2 – Microsoft’s enterprise big data analytics solution for the cloud.
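Because every blob is addressable over HTTPS, its location follows a predictable URL pattern. The sketch below shows that pattern; the account, container, and blob names are hypothetical placeholders, not real resources.

```python
# Build the public HTTPS URL for a blob. The account, container, and blob
# names here are illustrative placeholders only.
def blob_url(account: str, container: str, blob: str) -> str:
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

print(blob_url("mystorageacct", "media", "intro.mp4"))
# → https://mystorageacct.blob.core.windows.net/media/intro.mp4
```

In practice a client application would combine a URL like this with an access key or shared access signature, typically via one of the Azure Storage SDKs rather than raw HTTP.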
This option facilitates fully managed file shares in the cloud that can be mounted concurrently by Windows, Linux, and macOS clients. Accessible via the Server Message Block (SMB) protocol, Azure file shares can also be cached on Windows Servers with Azure File Sync.
The ease with which applications can be lifted and shifted to the cloud has been warmly received by users, as has the ability to mount Azure file shares from anywhere on the planet and on the most popular operating systems.
Flexibility and application compatibility are not a worry in terms of shared access, and the “always on” setup means on-premises network issues are a thing of the past.
Queue storage is configured to store large numbers of messages, which can be accessed from anywhere using HTTP or HTTPS via authenticated calls. Often viewed as short-term storage, messages remain “live” for seven days by default – the standard time-to-live period – though this can be configured.
Messages can be up to 64KB in size, but a queue can contain millions of messages, up to the capacity limit of the storage account, making it a popular way to build backlogs of work that require processing individually.
Once handled, messages can be deleted or retained on a case-by-case basis.
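The enqueue/dequeue pattern described above can be illustrated with a minimal in-memory stand-in. This is not the Azure SDK – just a sketch of the semantics: the 64KB message size limit, the seven-day default time-to-live, and skipping messages whose time-to-live has lapsed.

```python
import time
from collections import deque

MAX_MESSAGE_BYTES = 64 * 1024        # Queue storage's per-message size limit
DEFAULT_TTL_SECONDS = 7 * 24 * 3600  # default time-to-live: seven days

class ToyQueue:
    """In-memory illustration of Queue storage semantics (not the Azure SDK)."""

    def __init__(self):
        self._messages = deque()

    def enqueue(self, body: str, ttl: int = DEFAULT_TTL_SECONDS) -> None:
        if len(body.encode("utf-8")) > MAX_MESSAGE_BYTES:
            raise ValueError("message exceeds the 64 KB limit")
        self._messages.append((body, time.time() + ttl))

    def dequeue(self):
        """Return the oldest live message, silently dropping expired ones."""
        now = time.time()
        while self._messages:
            body, expires_at = self._messages.popleft()
            if expires_at > now:
                return body
        return None
```

A worker process would typically loop on `dequeue`, process each message, and only then delete it; the real service additionally hides an in-flight message behind a visibility timeout so that a crashed worker does not lose it.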
This is your go-to for large amounts of structured data – a NoSQL store that can handle authenticated calls from both inside and outside the Azure cloud. In essence, we’re talking terabytes of data built to serve web-scale applications.
The beauty lies in its ability to scale as demand increases, and Microsoft has recently launched a “premium offering” for table storage, in the shape of the Azure Cosmos DB Table API, which offers “throughput-optimized tables, global distribution, and automatic secondary indexes”.
Flexible datasets – address books, user data for web applications, device information – are also easily stored.
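The key model behind Table storage can be sketched in a few lines: every entity is a schemaless property bag addressed by a partition key plus a row key. The snippet below is a hypothetical in-memory illustration of that model, not the Azure Tables SDK, and the sample data is invented.

```python
# Toy illustration of Table storage's (PartitionKey, RowKey) addressing.
# Entities are schemaless dicts; the composite key must be unique.
store = {}

def upsert(entity):
    """Insert or replace the entity under its composite key."""
    key = (entity["PartitionKey"], entity["RowKey"])
    store[key] = entity

def get(partition_key, row_key):
    """Point lookup by the composite key; returns None if absent."""
    return store.get((partition_key, row_key))

# Hypothetical address-book entity, one of the flexible datasets mentioned above.
upsert({"PartitionKey": "contacts", "RowKey": "ada",
        "Email": "ada@example.com"})
```

Grouping related entities under one partition key is also what lets the service scale out: partitions can be distributed across storage nodes while point lookups stay cheap.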
In a word, yes. Azure Storage currently offers several account types, each with its own pricing structure and feature set. It’s up to you to decide which option would best suit your applications and needs.
Options include two versions of a general-purpose account – general-purpose v2, which is widely used and recommended for most scenarios, and the legacy general-purpose v1. Microsoft also offers legacy blob-only accounts, plus premium BlockBlobStorage and FileStorage accounts that do exactly what their names suggest.
As you can imagine, the answer to this is determined in part by the user’s requirements, and many solutions exist depending on factors such as data size, the transfer frequency required and how much bandwidth you have at your disposal.
Data movement itself is also in keeping with Microsoft’s push to present the user with as many options as possible. Offline transferring for one-time bulk data can be carried out using physical devices, whereas network transferring opens up another range of choices entirely.
They include a graphical interface for those who sporadically transfer files without the need to automate data movement, scripted or programmatic transfers through the relevant tools, and on-premises devices – which can be provided by Microsoft – that live in your datacenter and optimize your transfers.
There’s also a useful data transfer feature, which can be found within your Azure Storage account, that will provide a recommended transfer solution based on a variety of variables relating to your bandwidth, data size and transfer frequency needs.
Azure managed disks are virtual hard disks. As Microsoft themselves put it: “You can think of it like a physical disk in an on-premises server but, virtualized. Azure managed disks are stored as page blobs, which are a random IO storage object in Azure. We call a managed disk ‘managed’ because it is an abstraction over page blobs, blob containers, and Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of the rest.”
The crux of this side of things is that when you opt to use Azure managed disks, the platform creates and then manages the disks on your behalf. There are also several disk options (mainly SSD- and HDD-based) to choose from, while availability clocks in at 99.999% thanks to the platform maintaining three replicas of your data.
In essence, this means that if one or even two of your replicas experience technical issues for whatever reason, the remaining replicas kick in to ensure your infrastructure remains in place. It’s also the driving factor behind Microsoft’s 0% annualized failure rate.
It’s probably worth starting this section with the answer you’re looking for, from the horse’s mouth. “All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service Encryption (SSE),” says Microsoft.
And for a more detailed breakdown of the risks involved for misconfigured Azure Storage accounts, Gregg Rodriguez, Senior Consultant, Content and Communications Strategy, at infrastructure security experts CloudPassage has compiled the following words of wisdom (and warning):
“Public cloud computing requires a new approach to security,” he continues. “In the Azure environment, Microsoft provides a secure foundation across physical, infrastructure, and operational security, while you maintain responsibility for protecting the security of your application workloads, data, identities, on-premises resources, and all the cloud components that you control. This is referred to as the ‘Shared Responsibility Model’.”