by Paul Speciale | Jul 14, 2017 | News
Zenko Multi-Cloud Data Controller
Our Zenko Multi-Cloud Data Controller was launched as open source on July 11, 2017; you can read the full press release here. In a nutshell, it’s a new solution for managing data both in public cloud services and in local storage.
With Zenko under an Apache 2.0 open source license, our goal is for developers to freely use the unified S3 API and cloud storage capabilities in new applications. This means it’s free for use and distribution in your enterprise and embedded apps, edge devices and any “next great thing” you can think of. Zenko provides your apps with access to the AWS S3 public cloud (supported now in the launch product), and later we’ll support Microsoft Azure Blob Storage and Google Cloud Storage too (rollout details are below). The purpose is to make it as easy as possible for your apps to access any cloud, even those that do not natively support the AWS S3 API. Right now, Zenko can store Bucket data locally on your machine in Docker Volumes, optionally in memory (useful for fast transient processing or for testing), and in our Scality RING object store for on-premises and “private cloud” style storage.
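To give a feel for what bucket location control looks like at the S3 API level, here is a minimal sketch of the CreateBucketConfiguration body an S3 CreateBucket request carries; the location names in the loop are hypothetical illustrations of one-name-per-backend, not Zenko’s actual backend identifiers:

```python
def location_constraint_body(location: str) -> str:
    """Build the XML body of an S3 CreateBucket request that pins
    a bucket's data to a specific backend location."""
    return (
        "<CreateBucketConfiguration>"
        f"<LocationConstraint>{location}</LocationConstraint>"
        "</CreateBucketConfiguration>"
    )

# Hypothetical location names, one per storage backend.
for loc in ("docker-local", "mem", "scality-ring", "aws-us-east-1"):
    print(location_constraint_body(loc))
```

Because the constraint travels in the standard S3 request body, applications that already speak S3 need no new API to choose where data lands.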
The first release of Zenko is based on our previously launched open source S3 Server Docker instance, and uses Docker Swarm to manage deployment and orchestrate HA/failover across S3 Server containers. This is documented here on the Zenko.io website, and we are also working to provide more docs on configuring public cloud integration, with AWS now and the others soon.
Looking at the bigger picture, we’ll also enhance Zenko in the coming months with some new features that will make it even more capable. This includes a new open source policy-based data management engine called Backbeat, and a metadata search engine called Clueso. Backbeat is all about enabling mobility of data from on-premises Buckets to cloud Buckets through asynchronous replication. Later this year we’ll also provide Lifecycle management for auto-expiration and for transitioning (tiering) objects to the cloud. Clueso lets you search across clouds using the S3 metadata attributes you can already store with your objects.
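The metadata attributes Clueso searches over are the user-defined metadata the S3 API already supports; on the wire these are just `x-amz-meta-*` request headers stored alongside the object. A minimal sketch of that mapping (the metadata keys and values below are made up for illustration):

```python
def user_metadata_headers(meta: dict) -> dict:
    """Map user-defined metadata to the x-amz-meta-* request headers
    the S3 API uses to store it alongside an object."""
    return {f"x-amz-meta-{key.lower()}": value for key, value in meta.items()}

headers = user_metadata_headers({"Project": "zenko-demo", "Owner": "alice"})
print(headers)
# {'x-amz-meta-project': 'zenko-demo', 'x-amz-meta-owner': 'alice'}
```

Since this metadata is part of the standard S3 object model, anything you tag today remains searchable regardless of which backend the object data ends up on.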
To help you plan your roadmaps with these new features in mind, here is the rollout plan for these new capabilities in the Zenko open source project.
Zenko Rollout Plan
Our philosophy is to release features early, so you can get access to them as soon as possible and give us your feedback, comments and contributions as a community project. With that background, Zenko and its features will roll out as follows.
Zenko open source features supported at July Launch
- Unified S3 API
- HA/failover across two S3 containers managed by Docker Swarm
- AWS v4 & v2 authentication (with access keys stored in a credentials file)
- Bucket location control for object data storage in:
- Local storage / Docker volumes
- In-memory (fast transient processing)
- Scality RING
- AWS S3 (any S3 region endpoint)
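As a reference point for the AWS v4 authentication listed above, the v4 scheme derives a per-request signing key from the stored secret key through a chain of HMAC-SHA256 operations. A stdlib sketch of that derivation (the secret key and date below are placeholders, like the access keys kept in a local credentials file):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the signing-key chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str = "s3") -> bytes:
    """Derive the AWS Signature Version 4 signing key: an HMAC chain
    over the date, region, service, and the literal 'aws4_request'."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder credentials for illustration only.
key = sigv4_signing_key("verySecretKey1", "20170714", "us-east-1")
print(key.hex())
```

Scoping the key to a date, region, and service is what lets the same stored secret authenticate against any S3-compatible endpoint without ever sending the secret itself.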
In the late July 2017 time frame we will publish the following new capabilities:
- Bucket location control and data storage in Microsoft Azure Blob Storage
- Backbeat for Zenko to Zenko Cross-Region Replication (CRR) with local storage
In the September 2017 time frame we are targeting delivery of:
- Clueso engine for federated searches on S3 metadata attributes (independent of data location)
- Bucket Lifecycle for object expiration
- Backbeat for Zenko replication to the AWS S3 cloud (CRR)
And by the end of 2017:
- Backbeat for Zenko replication to Microsoft Azure Blob Storage (CRR)
- Bucket Lifecycle for tiering to AWS S3
If you don’t see what you need, let us know what other cool features we should plan for Zenko!
GitHub is the best place for contributions and user comments and questions, thank you!
by Tyna Callahan | Oct 31, 2016 | News
As you read this, thousands of developers all over the world, inside enterprises big and small, are digging into our Amazon S3 API all the way down to the source code. We can’t wait to see what they create! It’s only a matter of time, and we’re nudging things forward. Find out for yourself what everyone’s raving about: get hands-on with S3 Server now.
Why the world is coming to open source
From the smallest of seedlings, open source has grown into THE innovation engine for IT. It’s no mystery why: lower development and operational costs, faster time to market, freedom from proprietary vendor lock-ins, high-quality solutions… and thanks to source code accessibility, the freedom to customize and fix at will.
According to the annual 2016 “Future of Open Source” study by North Bridge and Black Duck, “Open source improves efficiency, interoperability, and innovation.” More than 65% of companies now use it for application development, and over 55% for production infrastructure.
We chose to make S3 Server open source for all these reasons, plus one more: open source has also become the fast path for sprouting and spreading hot new technology. Think Docker, Hadoop and their ecosystems, as well as NoSQL and NewSQL databases, just to take a few examples.
Not unexpectedly, open source is converging with another transformational IT phenomenon: the cloud. To paraphrase InfoWorld, if software is eating the world, the cloud is eating open source applications. Many winners of their 2015 Best of Open Source Software Awards (the “Bossies”) have a SaaS or hosted option.
The convergence is real: open source has definitely been folded into the great cloud migration. Still, there’s more than one way to get there. The public cloud, as pervasive as it may be, is not the best choice for every organization, every use case, every time. There are other paths to the same cost savings, dynamic scalability, and management ease. Which leads directly to the question:
Should we deploy on a public cloud or a private one?
Our emphatic answer is: Yes! That is, choose whichever model is best for your needs and each use case. Either way, your S3 Server-built applications have you covered. They will run impeccably on either kind of cloud, without your having to change a single line of code. How’s that for flexibility?
Dev and test with S3 Server. Then deploy on AWS, or alternatively, in your own data center using the Scality RING with industry-standard x86 hardware you’ve already invested in. Or go with a single-tenant private cloud hosted by a provider.
Any which way, you’ll gain immense scalability, vast TCO savings over traditional storage, and another advantage that’s truly unique: full application portability without having to rewrite code.
Born free
Open source is not an afterthought for S3 Server. It’s part of the very fabric. Open source is dear to our hearts because it’s the locus of originality, excitement, and innovation in IT today. We love the magic that happens when talented developers put their energy, ideas, and visions together to build something new. In fact, S3 Server was created from the work product of a gathering of hackers, some of whom impressed us so much that we hired them.
Not a bad outcome for our very first hackathon! So we’ve just done it again:
Innovation on Display at Holberton Hackathon in San Francisco
On October 21-23, Scality brought together 6 teams of ambitious and talented developers to collaboratively build some great tools with Scality S3 Server and an array of Kinetic drives from our partner Seagate.
Participants included 2 UC Santa Cruz grad students, an exchange student from Australia, a UC Berkeley junior who’s won 5 hackathons so far this year, several first-time hackers from the Holberton School… and a coder nicknamed “Nacho” from the original Kinetic development team at Seagate.
First prize went to the intrepid “Team 42,” a youthful posse of French developers newly relocated to Silicon Valley to open the first U.S. office of Paris-based 42. They took top honors for their ingenious S3 Server-based collaboration tool.
Learn more about the Hackathon teams and their projects here.
by Giorgio Regni | Jul 26, 2016 | News
We’ve recently debuted Scality S3 Server, our new S3 API implementation — the same as we use for our RING storage platform, open sourced and conveniently packaged in a lightweight Docker container.
You can download Scality S3 Server in less than 3 minutes and run it right on your laptop using local storage. That makes it the fastest and easiest way for developers to break new ground with object storage.
To give you an even faster start, we’ll be part of another ground-breaking innovation: the Docker Store, a new marketplace for validated “dockerized” software. We’re proud to announce that Docker has chosen Scality S3 Server as one of its first featured apps!
What makes the Docker Store so compelling is that it will offer developers and enterprises the same simplicity and confidence that consumers enjoy with Apple’s or Android’s app stores—namely, fast and easy downloading of vetted, validated, and user-rated software.
Thanks to Docker’s popularity and large audience base, we expect our presence on the Docker Store to attract a lot of interest and plenty of S3 Server downloads. As production-grade code, it offers something you can’t get anywhere else: a complete develop-to-deploy experience with object storage. You can build and test applications on your laptop or local server, and then deploy them at Big Data scale—on premises or on Amazon Web Services—without changing a single line of code.
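That develop-to-deploy promise boils down to swapping the endpoint your S3 calls target; the application code itself stays untouched. A minimal sketch of the idea (the endpoint values are illustrative: S3 Server listens on port 8000 in its tutorial setup, and the data-center hostname here is hypothetical):

```python
def s3_endpoint(target: str) -> str:
    """Pick the S3 endpoint for a deployment target; the code that
    issues S3 calls never changes, only this configuration value does."""
    endpoints = {
        "laptop": "http://localhost:8000",               # local S3 Server container
        "aws": "https://s3.amazonaws.com",               # AWS public cloud
        "datacenter": "http://ring.example.local:8000",  # hypothetical RING endpoint
    }
    return endpoints[target]

print(s3_endpoint("laptop"))
```

In practice this value would live in an environment variable or config file, so moving an app from laptop to production is a configuration change rather than a code change.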
Industry commentators are singing the praises of Scality S3 Server, and we think you will too. We know you’re going to do great things with it, and we can’t wait to see them. So go ahead: download Scality S3 Server, get into that juicy S3 API source code… and let your creativity loose!
To learn more about the Docker Marketplace, check out this TechCrunch article. Until the Docker Marketplace is publicly launched, you can download Scality S3 Server via DockerHub today and follow this simple tutorial to get started, or get the source code @ GitHub.
Giorgio Regni
Chief Technology Officer
@GiorgioRegni