Deploy Zenko on your laptop with Minikube

Since Kubernetes is Zenko’s default deployment choice, Minikube is the quickest way to develop Zenko locally. The official docs describe Minikube as

[…] a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

To install Minikube you’ll need VirtualBox (or another virtualization option); then follow the installation instructions for your operating system. The video tutorial contains an installation walk-through.

Once Minikube, kubectl, and helm are installed, start Minikube with Kubernetes version 1.9 or newer and, preferably, at least 4GB of RAM. Then enable the Minikube ingress addon so that services inside the cluster can be reached from your host.

$ minikube start --kubernetes-version=v1.9.0 --memory 4096
$ minikube addons enable ingress

Once Minikube has started, run the Helm initialization:

$ helm init --wait

With K8s now running, clone the Zenko repository and go into the Helm charts directory to retrieve all dependencies:

$ git clone https://github.com/scality/Zenko.git
$ cd ./Zenko/charts
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
"incubator" has been added to your repositories

$ helm dependency build zenko/
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 8 charts
Downloading prometheus from repo https://kubernetes-charts.storage.googleapis.com/
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com/
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com/
Downloading kafka from repo http://storage.googleapis.com/kubernetes-charts-incubator
Downloading zookeeper from repo http://storage.googleapis.com/kubernetes-charts-incubator
Deleting outdated charts

With your dependencies built, you can run the following shell command to deploy a single-node Zenko stack with Orbit enabled:

$ helm install --name zenko \
--set prometheus.rbac.create=false \
--set zenko-queue.rbac.enabled=false \
--set redis-ha.rbac.create=false \
-f single-node-values.yml zenko

To view the K8s dashboard, type the following; it will launch the dashboard in your default browser:

$ minikube dashboard

The endpoint can now be accessed via the K8s cluster IP (run minikube ip to display it). You now have a running Zenko instance in a mini Kubernetes cluster. To connect your instance to Orbit, find the instance ID:

$ kubectl logs $(kubectl get pods --no-headers=true -o \
custom-columns=:metadata.name | grep cloudserver-front) | \
grep Instance | tail -n 1

The output will look something like this:

{"name":"S3","time":1529101607249,"req_id":"9089628bad40b9a255fd","level":"info","message":"this deployment's Instance ID is 6075357a-b08d-419e-9af8-cc9f391ca8e2","hostname":"zenko-cloudserver-front-f74d8c48c-dt6fc","pid":23}

The Instance ID in this case is 6075357a-b08d-419e-9af8-cc9f391ca8e2. Log in to Orbit and register this instance. To test your Minikube deployment, assign a hostname to the cluster’s ingress IP address to make things easier, then test with s3cmd:

$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   machine
192.168.99.100 minikube
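
If you don’t want to edit /etc/hosts by hand, one way to append the entry (the hostname “minikube” is just a convenient choice):

$ echo "$(minikube ip) minikube" | sudo tee -a /etc/hosts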

By default, Minikube only exposes SSL port 443, so you’ll want to ask your client/app to use SSL. However, since Minikube uses a self-signed certificate, you may get a security error. You can either configure Minikube to use a trusted certificate, or simply skip the certificate check.

$ cat ~/.s3cfg
[default]
access_key = OZN3QAFKIS7K90YTLYP4
secret_key = 1Tix1ZbvzDtckDZLlSAr4+4AOzxRGsOtQduV297p
host_base = minikube
host_bucket = %(bucket)s.minikube
signature_v2 = False
use_https = True

$ s3cmd ls  --no-check-certificate
$
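
Instead of passing --no-check-certificate on every call, you can also disable the check in s3cmd’s config file and then exercise the instance. A quick sketch; the bucket and file names below are just examples:

$ echo "check_ssl_certificate = False" >> ~/.s3cfg
$ s3cmd mb s3://zenko-test
$ echo "hello zenko" > hello.txt
$ s3cmd put hello.txt s3://zenko-test/
$ s3cmd ls s3://zenko-test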

Head to the Zenko forum to ask questions!

How to replicate data across clouds with Zenko Orbit

The video below shows two demos of Zenko solving real-life issues faced by developers who need to replicate data across multiple clouds. Developers need to support multiple storage options for their applications, and dealing with the complexity of multiple APIs is hard. Even without writing applications with multi-cloud support, the egress costs of transferring large amounts of data across clouds can force the choice of a provider, reducing options. Zenko is designed to empower developers, giving them the freedom to choose the best storage solutions for their application while keeping control of where data is stored.

The first use case is that of a developer who prefers Amazon Glacier as the archive of choice and wants to use Google Cloud’s machine learning services. Some of the data needs to be available in Google Cloud for faster processing, but Glacier is slow and egress costs can be expensive. Zenko makes it easy to manage this multi-cloud scenario with a single policy. In addition, it lets developers pick the most cost-effective combination of storage options: without Zenko, if data is stored in AWS but analyzed in Google, you would incur expensive charges moving data out of Amazon.

The second demo shows Zenko’s ability to replicate data across three regions in AWS. One known limitation of S3 is that replication is limited to two regions. Replicating data to more than two regions increases data resiliency and security. For example, companies that need a point of presence in Asia, Europe, and the western US can keep data closer to the point of consumption. Companies that collect data and need to comply with data sovereignty regulations like GDPR face similar challenges. Zenko’s replication augments AWS’s basic capability.

Enjoy the demos and try them out for free on Zenko Orbit.

How to find keys and account info for AWS, Azure and Google

When configuring storage locations in Zenko Orbit, you need to enter some combination of access key, secret key, and account name. All of this information varies by cloud provider, and it can be annoyingly complicated to track down. This cheatsheet will help you configure access to AWS, Azure, and Google for Zenko Orbit.

This document assumes you know how to log into the AWS, Azure and Google cloud portals and that you know how to create a storage location in Zenko Orbit.

AWS

Location name = any descriptive name; “aws-location” “aws-demo” etc.

AWS Access Key and AWS Secret Key: https://console.aws.amazon.com/iam/home?#/security_credential

  • You may or may not have IAM roles set up in your account. If not, press the Continue to Security Credentials button
  • Press the “+” sign next to “Access keys (access key ID and secret access key)”
  • Press the Create New Access Key button
  • A window will appear with your Access Key and Secret Key (screenshot below)
  • Copy/paste your Secret Key somewhere safe.

Target Bucket Name = name of existing Bucket in Amazon S3
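
If you prefer the command line, you can create the same key pair with the AWS CLI, assuming it is installed and configured with sufficient IAM permissions (the user name below is a placeholder):

$ aws iam create-access-key --user-name your-iam-user
$ aws s3 ls   # list your buckets to pick a Target Bucket Name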

Azure

Location name = any descriptive name; “azure-location” “azure-demo” etc.

Azure Storage Endpoint = the “Blob Storage Endpoint” located on the top right of the Overview tab for your storage account (screenshot below).

Azure Account Name = the name of your Azure storage account located on the top of the Azure Portal (screenshot below – “scalitydemo” is Azure Account Name).

Azure Access Key = the “key1 / Key” visible when you select Access Key in the Azure Portal.

Target Bucket Name = name of existing Container in Azure
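
The same values can be read from the Azure CLI if you have it installed; “scalitydemo” and the resource group name below are placeholders for your own account:

$ az storage account show -n scalitydemo -g my-resource-group --query primaryEndpoints.blob
$ az storage account keys list -n scalitydemo -g my-resource-group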

Google

Location name = any descriptive name; “gcs-location” “gcs-demo” etc.

GCP Access Key and Secret Key = navigate to GCP Console / Storage / Settings / Interoperability tab (see screenshot below)

Target Bucket Name = name of existing Bucket in Google Cloud Storage

Target Helper Bucket Name for Multi-part Uploads = name of existing Bucket in Google Cloud Storage

(Google Cloud Storage handles MPU in such a way that Zenko requires a second bucket for temporary staging purposes)
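
The keys on the Interoperability tab are HMAC keys. If you use the gsutil CLI, a sketch of generating a pair for a service account (the email below is a placeholder) and listing your buckets:

$ gsutil hmac create my-sa@my-project.iam.gserviceaccount.com
$ gsutil ls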

Richard Payette

How to Replicate Data Between Digital Ocean Spaces and Amazon S3 buckets

Zenko’s multi-cloud capabilities keep expanding with the addition of Digital Ocean Spaces as a new endpoint within the Orbit management panel. Digital Ocean is a cloud hosting company that has experienced explosive growth since 2013 thanks to a very simple-to-use product. Spaces is the latest addition to the popular Digital Ocean cloud offering: a simple object store compatible with the Amazon S3 API.
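
Because Spaces speaks the S3 API, any S3 client can talk to it directly. As a sketch, an s3cmd configuration for a Space in the nyc3 region (the keys are placeholders):

$ cat ~/.s3cfg
[default]
access_key = <spaces-access-key>
secret_key = <spaces-secret-key>
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
use_https = True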

The video below demonstrates how to set up data replication between an Amazon S3 bucket and a Digital Ocean Space. One might want to set up such replication to keep multiple copies of a backup file, for example, or to increase the resilience of an application in those rare cases when a cloud is not available. Another typical use case is cost optimization: using the best features of many clouds while keeping costs under control.

The newly released version of Zenko Orbit lets you seamlessly replicate objects between Amazon S3, Digital Ocean, Google Cloud, Azure, Wasabi, Scality RING, and local storage. You can test the replication capabilities easily by creating an account on Orbit and then connecting it to your accounts on Amazon AWS and Digital Ocean Spaces. Watch the video for more details. If you have questions, don’t hesitate to ask on the Zenko forums.

How to set up a Docker Swarm cluster

One of the first recommended ways to test Zenko is to deploy it in a Docker Swarm cluster. Docker Swarm joins a group of servers into a single fault-tolerant cluster, which provides Zenko with a high-availability architecture.

Since the Zenko deployment documentation lists a functioning Docker Swarm cluster as one of the prerequisites, we recorded a short video to illustrate how to create a cluster with 2 worker nodes and 3 manager nodes. Because we do this every day, we tend to forget that we had to learn it too, at some point in (recent) time. Enjoy the video and reach out on our forum if you’re having any difficulty deploying Zenko: we’re here to help!
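
For reference, a rough sketch of the commands the video walks through; the IP address is a placeholder and the join tokens come from the join-token commands’ output:

# on the first manager node
$ docker swarm init --advertise-addr 192.168.0.10
# print the join commands for additional managers and for workers
$ docker swarm join-token manager
$ docker swarm join-token worker
# on each remaining node, paste the printed command, e.g.:
$ docker swarm join --token <token> 192.168.0.10:2377
# back on a manager, verify that all five nodes joined
$ docker node ls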

How to Mirror Cloud Data from Microsoft Azure to AWS S3

Maz from our team at Scality has been working on a simple guide explaining how to use Zenko and Orbit to replicate data between Microsoft Azure Blob storage and Amazon S3.

This is a very important milestone for us, as it shows how easy it is to create an account, log into the Zenko management portal, create a Zenko sandbox, and start replicating data between two completely different public clouds using the replication wizard, no command line required. – Giorgio Regni

Why is this news worthy?

It is all about data durability and availability!

Replicating your data across different providers is a great way to increase its protection and guarantee that your data will always be available, even in the case of a catastrophic failure:

  • In terms of durability, we now have two independent services, each of which has a durability of eleven 9’s. By storing data across both clouds, we can increase our data durability to “22 9’s,” which makes a data loss event a statistically negligible probability (see the quick calculation after this list).
  • We can also take advantage of immutability through object versioning in one or more of the cloud services, for even greater protection. We have also gained disaster recovery (D/R) protection, meaning the data is protected in the event of a total site disaster or loss.
  • In terms of data availability, what are the chances that two cloud regions in one service (for example, AWS US East and AWS US West) are unavailable at the same time? Stretching this further, what are the chances that two INDEPENDENT cloud services such as AWS S3 and Azure Blob Storage are unavailable at the same time?
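
A back-of-the-envelope calculation, assuming the two services fail independently:

P(object lost in one service)   = 1 − 0.99999999999 = 10^-11
P(object lost in both services) = 10^-11 × 10^-11 = 10^-22
Combined durability             = 1 − 10^-22, i.e. twenty-two 9’s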

Download the ebook or read it here: