Customers and applications are embracing cloud storage; we see this across many different businesses and industries. Since managing multiple cloud services can be complex because of all their differences, we set out to simplify it. We have also heard that customers really don’t want to bank everything on a single vendor’s cloud, especially their business data!
We launched our Zenko Multi-Cloud Data Controller earlier this year to provide a platform that simplifies the lives of developers building applications that require multi-cloud storage. The basic idea is that if you know the AWS S3 API, you can use it to access any of the public cloud storage services supported by Zenko. This (of course) includes AWS S3 itself, but also cloud storage services that don’t natively support the S3 API, such as Azure Blob Storage (supported now) and Google Cloud Storage (coming soon). Any conversation about multi-cloud should also consider private clouds, so on-premises object storage can be included as another “cloud” managed through Zenko.
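As a rough sketch of that idea (the endpoint URLs and the helper below are hypothetical illustrations of ours, not part of Zenko): the client code that addresses an object stays identical whichever backend sits behind the endpoint — only the endpoint changes.

```python
# Hypothetical endpoints: the same S3-style, path-addressed request shape
# works against AWS itself or against a Zenko instance fronting Azure
# Blob Storage or an on-premises RING.
ENDPOINTS = {
    "aws": "https://s3.amazonaws.com",
    "zenko": "http://zenko.example.com:8000",  # assumed Zenko deployment
}

def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style S3 URL; the shape is the same for every backend."""
    return f"{ENDPOINTS[endpoint]}/{bucket}/{key}"

print(object_url("aws", "my-bucket", "reports/q3.csv"))
print(object_url("zenko", "my-bucket", "reports/q3.csv"))
```

An application written against the S3 API can thus be repointed at a Zenko instance by swapping the endpoint and credentials, with no change to request logic.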
So how can we take advantage of multiple clouds from an application or business perspective? There are a number of things that surface as potentially valuable, such as:
Reversibility: Can I take my data back on-premises or move it to another cloud?
Data proximity: To take advantage of compute in the cloud, can I easily move my data close to the service that needs it?
Cost: How do I control the cost of cloud storage, given the differences across vendors, and make sure I don’t get trapped or locked in?
Where is my data: How do I search across everything in this cross-cloud content depot?
Durability: Is it enough to store data in one cloud, or can I take advantage of storing across two clouds? (More on this one later!)
As introduced in our other blogs, Zenko has four basic capabilities to help simplify cloud storage:
A single API that works on all clouds, including AWS, Azure & Google
Data is stored natively in each cloud, so it can be accessed directly
Data workflows based on policies, across clouds
Finding data simply across multiple clouds
Simplified Multi-Cloud Management through the Zenko Orbit Portal
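To make the "data workflows based on policies" capability concrete, here is a minimal toy sketch of policy-driven replication. The rule shape, cloud names, and first-match semantics are our own illustration, not Zenko's actual policy format.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    prefix: str  # object-key prefix the rule applies to
    targets: list = field(default_factory=list)  # clouds to replicate to

def replication_targets(key: str, rules: list) -> list:
    """Return the clouds an object should be copied to; first match wins."""
    for rule in rules:
        if key.startswith(rule.prefix):
            return rule.targets
    return []

# Illustrative policy: keep logs near Azure analytics, fan everything
# else out to two clouds for durability.
rules = [
    Rule("logs/", ["azure-blob"]),
    Rule("", ["aws-s3", "azure-blob"]),  # empty prefix matches all keys
]
```

A real engine would evaluate such policies on write, then drive the cross-cloud copy or migration asynchronously.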
All of these features are powerful, but the key question remains: how can we make them extremely simple to use? For this reason, we have now introduced Zenko Orbit, a cloud-based portal that makes multi-cloud storage as simple as point-and-click.
Getting started is easy: just use your Gmail account to register and log in. The first thing we simplify is starting up a cloud instance of Zenko itself. If you already have a Zenko instance created, just enter your InstanceID and Orbit will connect to it.
The Orbit dashboard provides useful information about your aggregate capacity utilization, and breaks it down across the various clouds. You can also easily manage your cloud storage accounts and credentials, plus monitor the Zenko instance resources and performance indicators.
Once these simple actions are completed, you are ready to use Zenko to access multi-cloud storage services. Orbit will soon offer an integrated data (object) browser for easy upload and download of objects to and from the target clouds. In addition, we’re working on some very simple capabilities to provide additional business value from multi-cloud storage through Orbit.
In our next blog, we’ll explore a super interesting, high-potential use case: storing replicated objects across two clouds, and what that can do for your data durability, availability, and, ultimately, your costs.
It’s hard to go anywhere lately without hearing the term multi-cloud. But what does multi-cloud really mean for storage? Is it just a fancy new word for “hybrid cloud”? Stay with me while I try to answer these questions, share our definition of multi-cloud, and explain why we created Zenko, an open-source Multi-Cloud Data Controller.
In our vision, multi-cloud is an acknowledgment that the enterprise world is application centric and that each application has its own infrastructure needs that actually evolve over time. It’s only natural to want the freedom to leverage multiple, different cloud infrastructures at the same time and over time.
When we say multi-cloud, it applies to both private clouds and public clouds. There’s a need to easily and transparently use different clouds based on their strengths, because AWS, Azure, and Google Cloud each have their own areas of expertise.
Multi-cloud is different from “hybrid” because it takes into consideration that an enterprise runs hundreds of different applications. Hybrid is focused on tiering old or lower-value data to the cloud, while multi-cloud is about optimizing workflows and using the right tool for the right job at the right time. What we heard is that customers still like to manage storage locally in their own data centers, but need the public clouds to leverage the native services they offer. This requires data mobility between clouds, whether private or public.
Multi-cloud also includes a notion of freedom: I am in front of customers frequently, and one of the recurring topics is not being locked into a specific cloud platform, whether public or private. True freedom and data mobility can only arise if different cloud platforms use the same communication protocols and share common abstractions to describe containers, objects, metadata, and authentication credentials.
We do not see any initiative from the large public cloud providers or the numerous software-defined storage vendors going in that direction, which is why we decided to start working on Zenko last year. We’re happy to announce that we’re making our source code available today as a set of open community projects on GitHub under the Apache 2.0 license.
Zenko is a Multi-Cloud Data Controller and focuses on four pillars:
AWS S3 API: A single API set and 360° access to any cloud. It gives developers an abstraction layer that enables the freedom to use any cloud at any time: a single unifying interface using the S3 API, supporting multi-cloud backend data storage both on-premises (Scality RING and Docker) and in the public cloud with AWS S3 and Microsoft Azure Blob Storage (with Google Cloud Storage to come soon).
Native format: Data written through Zenko is stored in the native format of the target storage and can be read directly, without going through Zenko. Data written to Azure Blob Storage or Amazon S3 can therefore leverage the advanced services of those public clouds.
Data workflow: A policy-based data-management engine for seamless data replication, data migration, and extended cloud workflow services such as cloud analytics and content distribution (available in September).
Metadata search: The ability to subset data based on key attributes, so you can interpret petabyte-scale data and easily manipulate it on any cloud to separate high-value information from data noise.
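To make the metadata-search pillar concrete, here is a toy, in-memory version of attribute-based subsetting. Zenko's real search runs over its metadata store at scale; the record shape and attribute names below are invented purely for illustration.

```python
def search(objects, **attrs):
    """Return the records whose metadata matches every requested attribute."""
    return [
        o for o in objects
        if all(o.get("metadata", {}).get(k) == v for k, v in attrs.items())
    ]

# Invented example inventory: each record is an object key plus metadata.
inventory = [
    {"key": "scans/001.tif", "metadata": {"project": "apollo", "status": "raw"}},
    {"key": "scans/002.tif", "metadata": {"project": "apollo", "status": "done"}},
    {"key": "docs/readme",   "metadata": {"project": "gemini"}},
]

# Subset by key attributes, regardless of which cloud holds the bytes.
apollo_done = search(inventory, project="apollo", status="done")
```

The point of the pillar is that this kind of query works across every connected cloud at once, rather than per-bucket and per-provider.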
Zenko focuses on ease of use and operation and relies on Docker Swarm for deployment and high availability. It runs as a set of containers, either locally or in the cloud — anywhere Docker can run, be it a laptop, physical servers, or an existing cloud provider.
Please head to our community website, zenko.io, to learn more about Zenko and its architecture, look at how to contribute, or download and use this new open-source Multi-Cloud Data Controller today.
We’ve recently debuted Scality S3 Server, our new S3 API implementation — the same as we use for our RING storage platform, open sourced and conveniently packaged in a lightweight Docker container.
You can download Scality S3 Server in less than 3 minutes and run it right on your laptop using local storage. That makes it the fastest and easiest way for developers to break new ground with object storage.
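If you want to try that yourself, the quickstart is a single Docker command (image name and port taken from the Docker Hub listing at the time; adjust the host port if 8000 is already in use on your machine):

```shell
# Pull and run Scality S3 Server locally, listening on port 8000
docker run -d --name s3server -p 8000:8000 scality/s3server
```

Once the container is up, any S3-compatible client pointed at http://localhost:8000 can create buckets and store objects on your local disk.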
To give you an even faster start, we’ll be part of another ground-breaking innovation: the Docker Store, a new marketplace for validated “dockerized” software. We’re proud to announce that Docker has chosen Scality S3 Server as one of its first featured apps!
What makes the Docker Store so compelling is that it will offer developers and enterprises the same simplicity and confidence that consumers enjoy with Apple’s or Android’s app stores—namely, fast and easy downloading of vetted, validated, and user-rated software.
Thanks to Docker’s popularity and large audience base, we expect our presence on the Docker Store to attract a lot of interest and plenty of S3 Server downloads. As production-grade code, it offers something you can’t get anywhere else: a complete develop-to-deploy experience with object storage. You can build and test applications on your laptop or local server, and then deploy them at Big Data scale—on premises or on Amazon Web Services—without changing a single line of code.
Industry commentators are singing the praises of Scality S3 Server, and we think you will too. We know you’re going to do great things with it, and we can’t wait to see them. So go ahead: download Scality S3 Server, get into that juicy S3 API source code… and let your creativity loose!
I’d like to warmly welcome you to the hub for Scality S3 Server, the open source version of our Amazon S3 API.
Scality S3 Server fulfills our long-held dream of creating a true open source project. For Scality, Amazon S3 was the natural choice because it has become the de facto standard for object storage. We believe the best way for developers to learn the technology is to explore the source code and see how things are put together. That’s why we’re seeding the open source community. Once people sink their teeth into Scality S3 Server and start playing with the source code, we’re confident that creative ideas and innovations will emerge beyond what we imagined. Such active community involvement is the magic of open source—and we’ve committed to the vision by equipping Scality S3 Server with its own distinctive brand, logo, and hub.
Giorgio Regni (CTO) and Jérôme Lecat (CEO).
We’ll keep the API current as Amazon makes periodic protocol changes. And because we’re open source, you’re free to tweak code on your own.
One key point we would like to emphasize: Scality S3 Server is by no means a stripped-down version of something else. It’s the same production-grade code as our commercial product, with no subtractions or compromises. With Scality S3 Server, you can go comfortably from your first line of code to a real production experience with object storage. Develop and test on your laptop, deploy your finished app on a larger server configured with RAID protection, and use it to store production data. Start small and scale up to as much as a few hundred terabytes.
So go ahead—take Scality S3 Server for a spin! Then let us know what you think, and don’t be shy. We look forward to your feedback.