Embassy Cloud Version 3 (legacy)

Note

This is now legacy and has been replaced by Embassy v4. Details are included in the new Embassy v4 documentation.

Quick Start Guide

In this tutorial we’ll create an instance (the OpenStack word for a VM) from a standard image, and set it up for shell access from your computer.

How to retrieve your credentials

We send you an e-mail explaining how to retrieve your username and initial password at the time we create the tenancy.

In this e-mail, you are given two wrapping tokens, which allow you two attempts to retrieve the credentials. Please be aware that the tokens only last for 96 hours, and that a wrapping token can only be used once. If needed, we can create new tokens for you.

There are two retrieval methods:

  • A bash script
  • A web access interface (HashiCorp Vault)

Script Access

A script that runs on both Mac and Linux. It requires a user token, one of your two wrapping tokens, and a retrieval path.

bash <(curl -sSL https://gitlab.ebi.ac.uk/vac/embassy-user-scripts/raw/master/get-password)

You will find the original repository on GitLab.

Web Access

Log in to Embassy OpenStack Dashboard

You should have been sent your login details already. Check the URL you were sent; it should be one of those defined in the table below, and you can log in using any web browser.

Embassy Cloud Endpoints
Cloud            URL
Embassy Cloud 5  https://extcloud05.ebi.ac.uk/
Embassy Cloud 6  https://extcloud06.ebi.ac.uk/
Embassy Cloud 7  https://cloud7.embassy.ebi.ac.uk/

Once you’ve logged in to the Embassy OpenStack interface, you’ll see an overview page, which summarises your tenancy’s current usage (see OpenStack Overview).

OpenStack Overview


Using the OpenStack CLI

You can also use the OpenStack CLI tool to interact directly with your project. You only need an RC credentials file. To download it, log in to the OpenStack dashboard and click on Access & Security -> API Access -> Download OpenStack RC File v3. An example of how to use the OpenStack CLI follows:

# Create a virtual environment
virtualenv ~/.venv-openstack

# Install the OpenStack CLI
source ~/.venv-openstack/bin/activate
pip install python-openstackclient

# Test: source credentials file and get a list of networks in this project
source ~/embassy/vac-jnavarro-development.rc
openstack network list
+--------------------------------------+---------------+--------------------------------------+
| ID                                   | Name          | Subnets                              |
+--------------------------------------+---------------+--------------------------------------+
| e25c3173-bb5c-4bbc-83a7-f0551099c8cd | ext-net-36    | 3c926da4-b320-4320-8d62-f70e2078a2fd |
| 2d771d9c-f279-498f-8b8a-f5c6d83da6e8 | ext-net       | b5c8ea12-6729-495c-9cfd-8a56557a8bff |
| 7421d53d-6467-4f29-9d4f-e96e8c85ecd8 | ext-net-31    | 69868395-d808-4e48-a10a-79854258aa1e |

[..]

Create an SSH keypair

For this test, we’ll use SSH keypair based authentication rather than password authentication. This is the most common way of authenticating in OpenStack, at least when getting started. In order to create a new SSH Key Pair, click on Access & Security, then on Key Pairs (see figure below).

Key Pairs

Access & Security - Key Pair

Now you can choose whether to upload an existing public key or create a new Key Pair. Let’s choose to create a new Key Pair, so click on the button marked Create Key Pair. Choose a name for this Key Pair, and click on Create.

Create a new Key Pair

Access & Security - Create a new Key Pair

Your browser should now download the private part of the keypair to your computer - make sure you save this, as you’ll need it when you SSH into the instance we’ll create later.

Note

You should keep this Private Key secure and with restricted access.

I’m using Linux on my desktop, so to restrict access to this private key I’ll run the following command:

# Only User should have access
chmod 600 keypair.pem
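If you prefer the CLI, the same keypair can be created there. This is a sketch; the name mykeypair is illustrative:

```shell
# Create a keypair; OpenStack prints the private key, which we save locally.
openstack keypair create mykeypair > keypair.pem

# Only the owner should be able to read the private key.
chmod 600 keypair.pem
```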

Create a Security Group

Next, we’ll create a new security group which will allow SSH access to your instance. In OpenStack, security groups control network access to instances; the default security group allows all outbound access (so instances can access the internet), and all access between instances within the same security group, but no incoming access. This keeps the instances secure but means you can’t contact them from outside the Embassy cloud.

We’ll create a security group that allows inbound ICMP (ping) and SSH access - this will allow you to perform network diagnosis (ping tests) and connect to your instance in a secure shell. Click on Access & Security, then on Security Groups.

Security Groups

Access & Security - Security Groups

Click on Create Security Group. Name it something like ‘SSH-and-ping’, with the description ‘allows inbound SSH and ICMP access’. To the right of your newly-created security group is a drop-down box of actions.

New Security Group

Access & Security - New Security Group

The default action in it is Manage Rules, so click on this. Now you can see the rules that currently exist for this security group. These rules allow all outbound traffic. Click on the Add Rule button.

Security Group Rule

Access & Security - Security Group Rule

The box that appears gives you much control over access rules; we’re simply going to use a couple of pre-defined rules to allow the access we need. Click on the Rule box, where it says Custom TCP Rule, and select SSH. Click on Add. Click on the Rule box again, and this time scroll down the list and select All ICMP. Click on Add again. These two rules will allow the inbound access we require.
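The same group and rules can be created with the OpenStack CLI. This is a sketch following the names used in this tutorial; note that 0.0.0.0/0 opens access to the whole Internet:

```shell
# Create the security group.
openstack security group create SSH-and-ping \
    --description "allows inbound SSH and ICMP access"

# Allow inbound SSH (TCP/22) and all ICMP from anywhere.
openstack security group rule create SSH-and-ping \
    --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0
openstack security group rule create SSH-and-ping \
    --protocol icmp --remote-ip 0.0.0.0/0
```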

Warning

In production, you will probably want to restrict SSH access to a range of IPs that you will use to access the instances.

Note

Restricting access using the right Security Group rules will give your instances more protection against attack. It’s important to remember that you are responsible for the security of your instances, and the Internet is a dangerous place.

Public Images

An OpenStack Compute cloud is not very useful unless you have Virtual Machine images (or Virtual Appliances).

What is a virtual machine image?

A virtual machine image is a single file which contains a virtual disk that has a bootable operating system installed on it.

The VAC team provides basic images for testing purposes only. This implies they may not be up-to-date and are not production-ready (i.e. you will need to address security issues yourself). Please note that these images can be updated without notice.

Tenants must deploy and manage their own images to get total control of their application and pipeline dependencies. The simplest way to obtain a virtual machine image that works with OpenStack is to download one that someone else has already created. Check this URL to find publicly available images. You can upload your own images to the Embassy cloud by clicking on the Create Image button.

Compute -> Images - Public images


To launch a new instance, go to the Compute main section. Click on the Images tab (Figure Compute -> Images - Public images). There you will find a selection of publicly-available images that EBI has provided to get you started. For this tutorial, we’ll use the cirros image. This is a minimal OS designed to be used for cloud environments, and is very useful for testing. Find the cirros image in the list, and click on the Launch button in its drop-down actions box.

Creating an instance

The dialog that appears is the Launch Instance dialog. You’ll need to fill in some information in each of its tabs to proceed.

Details

In Availability Zone, please select the AZ that your tenant should use - you should have been told this when it was created. Usually it is nova, but if you have options like AZ_1 or AZ_2, please select AZ_1.

In Instance Name, enter an appropriate name for this instance (or group of instances). I’ll enter ‘test’ here.

Number of Instances (count) - you can launch multiple instances at once. Each will get a unique name based on the name you enter, but all other aspects will be as defined in this dialog. For now, I’ll just launch 1 test instance.

Launch a test image - Details


Source

Instance Boot Source: Here, you decide what the source for this instance’s OS image will be. Since we’ve got here by clicking on Launch in the cirros image, this is pre-populated with Boot from image. Leave this as it is.

Launch a test image - Source


Flavor

The Flavour box allows you to choose the hardware flavour for this instance (or group of instances). This defines the resources that will be consumed by this instance - cores, RAM, disk space etc. As a rule, keep the instance as small as possible, so you don’t run out of quota too soon. Please choose s1.tiny for this test.

Launch a test image - Flavor


Networks

We have already created a private network and added a router, so that network has internet access. Since there is only one network available to your tenancy at the moment, this should have been pre-selected as the network for your instance. More advanced configurations include having multiple networks per instance, but we’ll leave this with the default network selected.

Key Pair

In general, images contain the cloud-init package to support SSH key pair and user data injection. Because many images disable SSH password authentication by default, boot the image with an injected key pair. You can then SSH into the instance with the private key and the default login account (e.g. ubuntu or centos).

Here you can select which Key Pair the instance will use. If you only have one key pair, it should be pre-selected here.

Launch a test image - Key Pair


Security Groups

Here is where you select which security groups your instance will be a member of. You should deselect default, and select SSH-and-ping instead.

Launch a test image - Security Groups


You can still apply further customizations when launching an image (e.g. metadata, volumes, etc.). For the purposes of this guide, we don’t need to fill in anything else, so just click on Launch. It will take a few seconds for your instance to start, but you should soon see it as Running.
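The whole Launch Instance dialog collapses to a single CLI call, if you prefer. This is a sketch; the keypair name mykeypair is illustrative, the rest follows this tutorial:

```shell
# Boot the cirros test instance with the keypair and security group from above.
openstack server create --image cirros --flavor s1.tiny \
    --key-name mykeypair --security-group SSH-and-ping \
    --availability-zone AZ_1 test

# Check the status; it should become ACTIVE within a few seconds.
openstack server show test -f value -c status
```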

Associating a Floating IP

When you deploy a new instance, it is given a fixed IP by default, but this is a private IP, so your instance cannot be reached from the outside world. OpenStack provides a special pool of IP addresses, called floating IPs, which are publicly routable. Floating IPs are not allocated to instances by default; EBI users need to explicitly allocate them from the available pools and then attach them to their instances, making them reachable from the outside world.

The process to do that is pretty straightforward. Click on the actions box next to your new test instance and select Associate Floating IP.

Instance actions dropbox


In the box that appears, you’ll see that your tenant does not yet have any floating IPs assigned; you’ll need to request one. To do this, click on the little + button.

Request a new Floating IP


You will be asked which network you want this floating IP address to be connected to (e.g. ext-net-36, ext-net-37, etc.), depending on where your router is attached. Now, click on Allocate IP.

Allocate a Floating IP


The IP address that has been selected for you will now be shown, along with the port that the IP address will be assigned to - don’t worry about this now, just click on Associate.


How to associate a Floating IP

The list of instances will be updated to show that this instance now has two IP addresses associated - its private IP (typically 192.168.x.x) and the new floating IP (see figure below).
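The same floating-IP workflow is available from the CLI. This is a sketch; the pool name and address are illustrative:

```shell
# Allocate a floating IP from the external pool, then attach it to the instance.
openstack floating ip create ext-net-36
openstack server add floating ip test 193.62.54.143

# Both the private and the floating IP are now listed for the instance.
openstack server list
```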

SSH to the new instance

You’re now ready to use SSH to connect to the new instance. Depending on the operating system of your computer, the precise method for doing this will vary. For this guide, we’ll use a Linux OS, as it’s the most straightforward.

In any case, you’ll be using the new SSH Key Pair to do the authentication, not a password. So, locate the file you downloaded (this is the private part of the pair). You’ll need to know the username to use, which can be different for different images - in the case of the cirros image, the username is cirros. You’ll also need to know the floating IP address to connect to, of course.

Test instance up & running


Armed with all this information, my command is this:

ssh -i ~/.ssh/id_rsa -l cirros 193.62.54.143
cirros@193.62.54.143:~#

You will probably be asked to confirm that you want to trust this connection, and then you should be logged in without needing to type a password. This brings this tutorial to a close. You can use these skills to create multiple instances, create services on them, and so forth.

Good luck and have fun!!!

Security Best Practices

The Shared Model

EMBL-EBI’s Embassy cloud provides collaborators with a secure virtual infrastructure located close to EMBL-EBI’s public data resources. In the Embassy model, systems administration and security of the cloud are a shared responsibility:

Shared Model

Infrastructure Security

This section explains both Service Provider security measures (EBI’s implementation of the Red Hat OpenStack Platform) and project (tenant) security.

Service Provider

We are using Red Hat OpenStack Platform which is an enterprise implementation of Openstack with built in security.

We have based our IaaS security model on this document

In summary -

  1. SELinux is configured to help protect against bridging of security domains via API access
  2. Customized controller firewall rules are in place to restrict external access to API ports only.
  3. DMZ creation for the OSP installation gives logical separation from EBI’s internal infrastructure.
  4. Rate limiting is configured in HAProxy to help prevent Denial of Service attacks
  5. API endpoints are secured with SSL/TLS
  6. Keystone tokens are time limited and expire

Tenant

Tenant (Project) security is managed with a combination of strict EBI policy and tools made available to tenants like Security Groups and a requirement to enable firewall protection on internet accessible instances.

Tenants are advised to deploy a firewall in addition to Security Groups. This can be on a per instance basis (iptables), firewall appliance (e.g. PFSense), or bastion node.

Security Groups

Security groups control network access to instances; the default security group allows all outbound access (so instances can access the internet), and all access between instances within the same security group, but no incoming access.

This keeps the instances secure but means you can’t contact them from outside the Embassy cloud. You will need to create a new security group that allows SSH access - this will allow you to connect to your instance via secure shell.

Steps for creating your new Security Group in the Openstack Dashboard:

  • Select Project > Network > Security Groups.
  • Click on Create Security Group. Give it a name e.g: SSH-and-ping and a description.
  • To the right of your newly created Security Group, click on Manage Rules.
  • Click on the Add Rule button.
    • Click on the Rule box, where it says Custom TCP Rule, select your protocol (e.g. SSH)
    • Choose the IP range you would like to allow, and write it in the CIDR field.
    • Click on Add.

After these steps, inbound access will be allowed, and you are now able to see the rules that currently exist for this security group, including the new one you created.

Warning

Choosing CIDR 0.0.0.0/0 allows everyone on the Internet to reach your SSH port. We recommend restricting SSH access to the range of IPs that you will use to access the instance. This will give you another layer of protection against attack. It’s important to remember that you are responsible for the security of your instances, and the Internet is a fairly dangerous place.

Protect your VMs

Here are some tips for making your deployments more resilient in OpenStack:

  1. For critical instances, use Cinder for storing the OS and data volumes. This is persistent and has the added advantage of 7 daily snapshots.
  2. If you require high availability you will need more than one instance. Make sure they are running on different hosts by using anti-affinity rules.
  3. Use our Embassy Cloud S3 Object Store (native Amazon S3 compatible) for your backup jobs.
  4. It is important to stress that you should not rely on snapshots as a backup; they are just a convenience tool and are not application-aware. You should be able to programmatically redeploy all instances using the API (Heat or similar), as this is the Cloud model. Do not do manual, unscripted instance deployment/configuration.
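Tip 1 above, for example, can be done from the CLI by booting from a Cinder volume. This is a sketch; the image, size and names are illustrative:

```shell
# Create a bootable Cinder volume from an image, then boot the server from it.
# The root disk now lives on persistent Cinder storage.
openstack volume create --image centos7 --size 20 os-volume
openstack server create --flavor s1.small --volume os-volume \
    --key-name mykeypair --network my-private-net app-node-1
```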

Storage

NFS

For projects that require access to our shared storage, please follow these 5 steps so your VM(s) can access it via NFS. Before proceeding, please make sure the instance you are trying to connect to the shared storage has been deployed with an interface on the data network (net_provider_XXXX), and that this network is the second on your instance’s network list. The net_provider ID is provided by the Cloud Team after your request. The following examples apply to CentOS only.

1.- Install the dnsmasq package if it is not already present on the system. Activate the second interface to automatically get an IP on the net_provider_XXXX network.

$ yum install dnsmasq
$ dhclient eth1

2.- Create/modify your /etc/dnsmasq.conf file so it looks like this (where XXXX is the number associated with your net_provider_XXXX network):

/etc/dnsmasq.conf
# Route all shared storage queries to the service IP
server=/em-isi-XXXX.ebi.ac.uk/<Service IP provided by the Cloud Team>

# Route the rest of the queries to the Internet (or your preferred DNS server)
server=8.8.8.8

Note: Please request the connection details from the Cloud Team at EBI (embassycloud@ebi.ac.uk).

3.- Change your /etc/resolv.conf so that you use the dnsmasq service from now on:

/etc/resolv.conf
nameserver 127.0.0.1

Note: Bear in mind that we normally configure the private subnets to assign the Google DNS servers in the DHCP configuration, so you may have to override this DNS configuration in the network settings within your OpenStack tenancy.

4.- Start and enable the dnsmasq service:

$ systemctl enable dnsmasq.service
$ systemctl restart dnsmasq.service

5.- Check the connectivity and test mount the NFS:

$ ping em-isi-XXXX.ebi.ac.uk
64 bytes from 10.10.10.10 (10.10.10.10): icmp_seq=1 ttl=64 time=0.437 ms
...
(Ctrl-C)

$ ping em-isi-XXXX.ebi.ac.uk
64 bytes from 10.10.10.11 (10.10.10.11): icmp_seq=1 ttl=64 time=0.437 ms
...
(Ctrl-C)

# Note: Make sure the ip returned by the 'ping' command changes in every
# execution and also test external name resolution to ensure that dnsmasq is
# working properly:

$ ping www.google.com
64 bytes from lhr35s01-in-f4.1e100.net (172.217.23.4): icmp_seq=1 ttl=64 time=4.607 ms
...
(Ctrl-C)

$ mount -t nfs em-isi-XXXX.ebi.ac.uk:/ifs/yourShareName /mnt
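To make the mount persistent across reboots, an /etc/fstab entry can be added. This is a sketch; the share path matches the test mount above, and the _netdev option defers mounting until the network is up:

```shell
# Persist the NFS mount; `mount -a` applies it without a reboot.
echo 'em-isi-XXXX.ebi.ac.uk:/ifs/yourShareName /mnt nfs defaults,_netdev 0 0' >> /etc/fstab
mount -a
```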

S3 Object Store

Embassy users can request access to our Object Stores; we have an S3-compatible Object Storage backend. Some typical Object Store use cases are:

  • No requirement for a POSIX filesystem
  • Large datasets
  • Unstructured data
  • Backups
  • Archiving

If you would like to use our Object Stores or simply explore the technology, please send us an email to embassycloud@ebi.ac.uk and we will create an environment for you.

Note

  • The bucket creation feature is not available to users; if you require additional buckets please let us know.
  • Our S3-compatible Object Store is not backed up. Please make sure you don’t use it as your only backup target.

S3

In order to use our S3-compatible object store, you can download the AWS Command Line Interface (awscli) from https://aws.amazon.com/cli/. Alternatively, you can also use https://github.com/s3tools/s3cmd, another interface written in Python.

Examples of use:

$ export AWS_ACCESS_KEY_ID=yourAccessKeyId
$ export AWS_SECRET_ACCESS_KEY=yourSecretAccessKey

(or)

$ aws configure
#(and follow the steps)

# Create a bucket
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 mb s3://test

# List buckets
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 ls
2021-01-29 15:35:21 test

# Upload file to bucket
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp helloworld.txt s3://testbucket/
upload: helloworld.txt

# List files within bucket
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 ls s3://testbucket/
2021-01-29 15:36:51 13 helloworld.txt

# Upload directory including only jpgs and txts
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp /tmp/foo/ s3://testbucket/ --recursive --exclude "*" --include "*.jpg" --include "*.txt"

# Generate a temporary url for users to download a given object - default expiration time 3600s
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 presign s3://testbucket/myobject
https://s3.embassy.ebi.ac.uk/testbucket/myobject?AWSAccessKeyId=ozB4pHyzrPUjXo1fw57&Signature=pG3xRpKyTuxQq8xatRUusJ6oE%3D&Expires=1526574462

# Obtain the used space and number of objects (result displayed in bytes)
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3api list-objects --bucket testbucket --output json --query "[sum(Contents[].Size), length(Contents[])]"

# Delete a bucket (use --force if it's not empty)
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 rb --force s3://test
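If you prefer s3cmd, the same endpoint can be used by overriding the host settings. This is a sketch; credentials come from `s3cmd --configure`, and the host values assume the s3.embassy.ebi.ac.uk endpoint above:

```shell
# List buckets and upload a file via s3cmd against the Embassy endpoint.
s3cmd --host=s3.embassy.ebi.ac.uk --host-bucket=s3.embassy.ebi.ac.uk ls
s3cmd --host=s3.embassy.ebi.ac.uk --host-bucket=s3.embassy.ebi.ac.uk \
    put helloworld.txt s3://testbucket/
```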

S3 Java SDK

If your Java application needs to interact with our S3-compatible Object Store, you have two options:

  • Amazon Java SDK: if your application requires portability to Amazon.
  • IBM Cloud Object Storage Java SDK: if you would like to use all the features of our Object Store, use the SDK provided directly by the vendor. You can find examples and code repositories at that link.

Example code for Amazon Java SDK:

import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class EBIaws {

  public static void main(String[] args) {

    // The endpoint is the Object Store root; the bucket name goes on the
    // request, not in the endpoint URL. Path-style access is required.
    final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration("https://s3.embassy.ebi.ac.uk", "eu-west-2"))
        .withCredentials(new EnvironmentVariableCredentialsProvider())
        .withPathStyleAccessEnabled(true)
        .build();

    System.out.println("Listing objects");
    ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
        .withBucketName("mybucket"));
    for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
      System.out.println(" - " + objectSummary.getKey() + "  " +
                         "(size = " + objectSummary.getSize() + ")");
    }
    System.out.println();
  }
}

Virtual Gluster

What is Gluster?

Gluster is a distributed scale out filesystem that allows rapid provisioning of additional storage based on your storage consumption needs. It incorporates automatic failover as a primary feature. All of this is accomplished without a centralized metadata server. For more information please visit the official gluster documentation.

Automating a Gluster Cluster Deployment

Openstack uses the Heat API for orchestration, which automates provisioning. We have created a Heat template which can deploy a Gluster cluster for you with one simple command.

If you are interested in using this template then please contact embassycloud@ebi.ac.uk .

Example deployment

This is a distributed two replica Gluster deployment (similar to RAID 10).


The Gluster deployment is composed of ten s1.jumbo nodes (including the master), and we use an OpenStack flavour to define the total Gluster disk space. The disks will all be local to the compute nodes (ephemeral).

By simply editing gluster-environment.yaml we can change these parameters -

$ cat gluster-environment.yaml
parameter_defaults:
  key_name: cems
  instance_count: 9
  instance_type: s1.jumbo
  local_network: f3935f65-00e7-4787-89ca-eaeb302cfbd6
  availability_zone: AZ_2
  image_id: '1dbc3a5b-0930-4652-a4f2-c5d21166ce5a'

Execute the following commands for deploying your virtual Gluster filesystem:

$ openstack stack create --template gluster_stack.yaml -e gluster-environment.yaml test_gluster
$ openstack stack list
$ openstack server list

EBI Data Access

EBI internal databases

Access to internal DBs in Read/Write or Read Only mode is available through the policy below to Embassy tenants and to Embassy Hosted Kubernetes (EHK) users.

The Embassy Tenant will…

  • Contact the internal EBI database owner (GTL) to gain their explicit permission. This may be easiest done by asking their EBI GTL sponsor to contact the DB owner if the Embassy tenant is not an EBI staff member
  • Request access from the EBI DB team by emailing itsupport@ebi.ac.uk

The Database Team will…

If read-only access is requested

  • Contact the Database Owner (GTL) to establish approval for the new user
  • Give you the connection information (a dedicated database user will be required for the read-only Embassy connection, e.g. embassy_ro)
  • Liaise with the Networking/Security team to open the connection port, allowing the database user to connect
  • Link all relevant information back to the RT ticket

If read-write access is requested

  • Apply a hardened security profile to the database, ensuring the following (downtime may be required to implement the increased security):
    • No non-standard plugins or extensions are installed that might allow OS commands to be run via SQL (lib_mysqludf_sys in MySQL, “untrusted” languages in PostgreSQL, etc)
    • The latest OS and DB versions and patches are installed as per TSC guidelines, and schedules applied in order to minimize exposure to known bugs
    • There is an encrypted SSL channel for the database connection credentials
    • There is a dedicated database user for the Embassy write connection (e.g. embassy_rw)

The Database Owner (GTL) will…

  • Consider the implications of this new mode of access to any other stakeholders of the same database (Service Teams/Technical Leads) and be content there is consensus to proceed
  • Ensure the internal EBI database does not contain human or other data that requires controlled access
  • Be aware of the security implications of granting write access to the internal database from a location external to the EBI network and continue to maintain full responsibility for the internal database (Embassy is in a DMZ outside the internal EBI Network and is therefore accessible from the Internet, secured by the tenant admin)
  • Approve access for the new user

The Embassy Tenant Admin will…

  • Deploy and maintain strict security procedures to mitigate risk, including:
    • Ensuring the tenancy has Security Groups applied
    • Arranging access via SSH keys controlled through a bastion host with an active firewall
    • Ensuring SSH key access/database credentials are given to named users and not redistributed
    • Closing any reported security vulnerabilities as soon as possible

EGA dataset

Please follow these steps for accessing the EGA dataset in EBI:

  1. Install the following dependencies:

~# yum install maven git fuse fuse-libs

  2. Download and build ega-fuse-client:

$ git clone https://github.com/EGA-archive/ega-fuse-client.git
$ cd ega-fuse-client
$ mvn package

  3. Allow non-root users to specify fuse mount options. Your /etc/fuse.conf file should look like this:

# mount_max = 1000
user_allow_other

  4. Obtain a bearer token from EGA AAI.
  5. Mount the EGA dataset:

$ cd ega-fuse-client
$ mkdir ./mountpoint
$ java -Xmx8G -jar target/EgaFUSE-1.0-SNAPSHOT.jar -t y0urBe4rerTokEnFroMEGAaaii -m ./mountpoint > /preferred/path/ega-fuse-client-`date +%Y%m%d%H%M%S`.log 2>&1 &
$ ls ./mountpoint/Dataset

Note

Expected download speed is 5-10MB/s depending on the network load.

FiRe Archive

For private access, Embassy Cloud users can access the FiRe archive in the same way they do within EBI. Use your FiRe credentials and access the same endpoint urls for managing your FiRe objects.

The public access endpoint offers a method for accessing files which are publicly available on transfer services, but without the overhead of crossing WAN borders, since this public access service is hosted internally in the EBI network. The public access endpoint is implemented over HTTP, and the file content will be streamed in the HTTP response if a valid HTTP request is made.

The following endpoints can be employed depending on your location. Embassy Cloud is in the Hemel Hempstead data centre.

Public Endpoints
Endpoint Protocol Data centre
http://hh.fire.sdo.ebi.ac.uk/fire/public http Hemel Hempstead
https://hh.fire.sdo.ebi.ac.uk/fire/public https Hemel Hempstead
http://hx.fire.sdo.ebi.ac.uk/fire/public http Hinxton
https://hx.fire.sdo.ebi.ac.uk/fire/public https Hinxton

You can check FIRE docs here [2].

[2]Until public DNS is updated, please add this entry to your /etc/hosts: 193.62.197.14 docs.fire.ebi.ac.uk

Use Case: Accessing ENA data files

The ENA project maintains a collection of files publicly available in FiRe, so you can access ENA data files through the FiRe public endpoints. Here is one example using the curl command to download two datasets:

 # Bandwidth test using FIRE endpoint
 curl -o /dev/null https://hh.fire.sdo.ebi.ac.uk/fire/public/era/fastq/ERR226/002/ERR2262402/ERR2262402_2.fastq.gz ;
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 100 4645M  100 4645M    0     0  84.2M      0  0:00:55  0:00:55 --:--:-- 85.8M

 # Download using "remote name (i.e. ERR2262402_1.fastq.gz file)"
 curl https://hh.fire.sdo.ebi.ac.uk/fire/public/era/fastq/ERR226/002/ERR2262402/ERR2262402_1.fastq.gz -O --remote-name
 curl https://hh.fire.sdo.ebi.ac.uk/fire/public/era/fastq/ERR226/002/ERR2262402/ERR2262402_2.fastq.gz -O --remote-name

You can also “stream” the data, as in the example below, and speed up the process using tools like GNU parallel.

 # Fastq Convert To Fasta
 $ curl -s https://hh.fire.sdo.ebi.ac.uk/fire/public/era/fastq/ERR226/002/ERR2262402/ERR2262402_2.fastq.gz | gunzip -c  | paste - - - - | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' > my.fasta

 $ head my.fasta
 >ERR2262402.1 1/2
 GGAAAACCTTTGCTTCTCTACAACGCGGATCCTGTCCACGACGCCAACGGAGGATGTTCCGCCTACAAGGACGGAACTCACGACTATTCCGATGAAGTGAAGAACTTCTTCACACTCAGGAATATGTGGTGGGGCTACTAC
 >ERR2262402.2 2/2
 GCTCCCGTCGCCGTCCAATTGATCCTTGACGGGTCACATGCAAATATCTGTGTCTGATATGATATAAAAAACCATCCATGGAGGAACATGAAAATATTAAGTTGCCTCAGATTAAGAGAATACCTTCGAGGATAGTTCTTTTTTCGAAGA
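The paste/sed/cut pipeline above is easy to verify locally on a tiny inline record before pointing it at a multi-gigabyte download (the demo read below is made up):

```shell
# Build a one-record gzipped fastq, then run the same fastq -> fasta pipeline.
printf '@read1 1/2\nACGT\n+\nIIII\n' | gzip -c > demo.fastq.gz
gunzip -c demo.fastq.gz | paste - - - - | sed 's/^@/>/g' | cut -f1-2 | tr '\t' '\n' > demo.fasta
cat demo.fasta
```

The output is the header line with `@` replaced by `>`, followed by the sequence; the `+` separator and quality line are dropped by `cut -f1-2`.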

Advanced Features

LBaaS

We have implemented LBaaSv2 in Embassy extcloud06 (Ocata).

You can create and manage load balancers with the Horizon web dashboard. However, a new load balancer needs to be associated with a new security group to allow access, and this can only be done on the command line via the API and the python neutron client (see below).

By default the quota for load balancers is 2.

Limitations

  • RedHat OSP11 (Ocata) does not support Octavia at the moment.
  • Embassy cloud extcloud05 runs an older version of OpenStack (Mitaka) and does not support LBaaSv2.
  • SSL termination is currently unsupported, as this requires a secrets store like Barbican, which is not implemented. However, you can terminate HTTPS connections at the member servers instead.
  • No granular configuration of global HAProxy variables such as maxconn.
  • LBaaS Namespace Driver HA: the namespace driver lacks API calls to migrate between agents.

LB Security Groups

Use the python neutron client

For detailed commands see section ‘LBaaS v2 operations’ in https://docs.openstack.org/ocata/networking-guide/config-lbaas.html
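As a sketch, the flow with the neutron client is to create a security group that admits traffic on the listener port and attach it to the load balancer's VIP port. The group name and port number below are placeholder assumptions, not values from this guide:

```shell
# Create a security group allowing TCP traffic on the listener port (80 here)
neutron security-group-create lb-http-sg
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 80 --port-range-max 80 lb-http-sg

# Look up the load balancer's VIP port, then attach the security group to it
neutron lbaas-loadbalancer-show <lb-name-or-id>   # note the vip_port_id field
neutron port-update <vip-port-id> --security-group lb-http-sg
```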

Floating IP

Floating IPs can be associated with load balancers in Horizon, provided you have the quota available.

Deleting a Load Balancer

This task is hard to achieve in Horizon and the path is not obvious. Follow these steps:

Derived from this bug report:

  • in “Load Balancers” click on the name of the LB (e.g. “lb1”)
  • go to “Listeners” tab, click on the name of the listener
  • click on the link shown next to “Default Pool ID” (this is the trick!)
  • click on the link shown next to “Health Monitor ID”
  • now you can click on the dropdown in the top right and select “Delete Health Monitor”
  • (you may need to navigate back to the pool page; worst case, start again from the top-level Horizon page)
  • back on the pool page, click on the dropdown in the top right and select “Delete Pool”
  • then delete the listener, then delete the load balancer

Alternatively, use the API directly with the python neutron client.
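On the command line the teardown follows the same reverse order as in Horizon. A sketch with the python neutron client (all names and IDs are placeholders); each layer must be deleted before its parent:

```shell
# Delete in reverse order of creation: monitor, pool, listener, load balancer
neutron lbaas-healthmonitor-delete <healthmonitor-id>
neutron lbaas-pool-delete <pool-name-or-id>
neutron lbaas-listener-delete <listener-name-or-id>
neutron lbaas-loadbalancer-delete <lb-name-or-id>
```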

Host Affinity

In OpenStack, you can create a server anti-affinity group to ensure your instances run on different physical hosts, for HA in case of host failure.

  1. Create a new group, defining the policy type:
$ openstack server group create split --policy anti-affinity
+----------+--------------------------------------+
| Field    | Value                                |
+----------+--------------------------------------+
| id       | 42dbeade-fb83-4892-8e0e-662c0ab734dc |
| members  |                                      |
| name     | split                                |
| policies | anti-affinity                        |
+----------+--------------------------------------+
  2. Check you can see the policy by listing the groups using the --long option:
$ openstack server group list --long
+--------------------------------------+------------+---------------+---------+------------+---------+
| ID                                   | Name       | Policies      | Members | Project Id | User Id |
+--------------------------------------+------------+---------------+---------+------------+---------+
| 42dbeade-fb83-4892-8e0e-662c0ab734dc | split      | anti-affinity |         |            |         |
+--------------------------------------+------------+---------------+---------+------------+---------+
  3. Create a new server in the server group from the previous step.
$ openstack server create --image 44b68ad4-4a36-4d74-abb8-5fa1e1d34e07 --hint group=42dbeade-fb83-4892-8e0e-662c0ab734dc --flavor s1.tiny --nic net-id=f3935f65-00e7-4787-89ca-eaeb302cfbd6 test-instance

$ openstack server group list --long
+--------------------------------------+------------+---------------+--------------------------------------+------------+---------+
| ID                                   | Name       | Policies      | Members                              | Project Id | User Id |
+--------------------------------------+------------+---------------+--------------------------------------+------------+---------+
| 42dbeade-fb83-4892-8e0e-662c0ab734dc | split      | anti-affinity | ff56558e-2e96-4784-ad54-20683300171b |            |         |
+--------------------------------------+------------+---------------+--------------------------------------+------------+---------+

After you have created the server group with the commands above, you can also use Horizon to launch instances into the same server group, ensuring they are placed on different hosts. Go to the ‘Server Group’ tab and select the server group you want.
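To confirm the scheduler actually split the instances, you can compare their hostId values: hostId is a per-project hash of the underlying physical host, visible without admin rights, and members of an anti-affinity group should show different hashes. The instance names below are placeholders:

```shell
# hostId differs when the instances landed on different physical hosts
openstack server show <instance-1> -c hostId
openstack server show <instance-2> -c hostId
```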

Server group selection in Horizon (affinity.png)

Backups with the OpenStack CLI

Image Backups

You can back up OpenStack Glance images directly to Embassy S3 Object Storage.

  1. Make sure you have an active Embassy S3 bucket with enough space and you can access it.
  2. Create an instance from a standard Ubuntu 18.04 cloud image, SSH to it, and install the OpenStack client:
$ sudo apt install python-openstackclient
  3. Create an rc file with your credentials and source it:
$ cat credsfile.rc
export OS_USERNAME=test
export OS_PROJECT_NAME=testing
export OS_PASSWORD=**************
export OS_AUTH_URL=https://extcloud06.ebi.ac.uk:13000/v2.0

$ source credsfile.rc
  4. Upload the image directly to S3:
$ openstack image save cirros-0.3.4-x86_64-disk.img | aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp - s3://testing/cirros
  5. List the image in S3:
$ aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 ls s3://testing/cirros
2019-04-05 16:33:05   13287936 cirros
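Restoring works in the opposite direction: download the object from S3 and register it with Glance again. The image name and formats below are assumptions for the cirros example above:

```shell
# Download the backup from S3, then create a new Glance image from it
aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp s3://testing/cirros ./cirros
openstack image create --disk-format qcow2 --container-format bare \
    --file ./cirros cirros-restored
```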

Volume Backups

You can either use the OpenStack API or built-in Unix tools like dd.

Example using dd and uploading to the S3 Object Store:

$ sudo dd if=/dev/sdb bs=64M status=progress | aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp - s3://testing/disk-image-1
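The dd backup can be restored the same way in reverse, streaming the object from S3 onto the target block device. Double-check the device name first: writing to the wrong device destroys its data, and the device should be unmounted.

```shell
# Stream the backup from S3 back onto the (unmounted) target device
aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp s3://testing/disk-image-1 - \
    | sudo dd of=/dev/sdb bs=64M status=progress
```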

Backup active Cinder Volumes attached to instances to S3

If a volume is ‘active’ (attached), you need to snapshot the volume first, then create an inactive volume from the snapshot, and finally create an image from that volume:

$ openstack volume snapshot create --name extra-disk --force extra-disk
$ openstack volume snapshot list
$ openstack volume create --snapshot <snapshot-name-or-id> --size <size> <new-volume-name>
$ openstack image create --volume <volume-name-or-id> <image-name>
$ openstack image save <image-name> | aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp - s3://testing/<image-name>

Backup inactive Cinder Volumes unattached to instances to S3

$ openstack image create --volume <volume-name-or-id> <image-name>
$ openstack image save <image-name> | aws --endpoint-url https://s3.embassy.ebi.ac.uk s3 cp - s3://testing/<image-name>

Note

Please delete all extra images and volumes created during this process after the backup has completed. We do not have a very large image store.
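For the volume-backup flow above, the cleanup amounts to deleting the temporary image, volume, and snapshot (using the same placeholder names as earlier):

```shell
# Remove the temporary objects once the S3 upload has completed
openstack image delete <image-name>
openstack volume delete <new-volume-name>
openstack volume snapshot delete <snapshot-name-or-id>
```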

Image Download

Using a third-party container to download a Glance image from extcloud05 to your local host:

~$ pwd
/home/[your home dir]

~$ mkdir image_dump

~$ cat creds.rc
OS_USERNAME=[openstack user name]
OS_PROJECT_NAME=[openstack project name]
OS_PASSWORD=[openstack password]
OS_AUTH_URL=https://extcloud05.ebi.ac.uk:13000/v2.0

~$ docker run --env-file creds.rc -v $(pwd)/image_dump:/image_dump/ -it platform9/openstack-cli

(openstack) image list -c Name -f value
ubuntu-16.04
cirros
(openstack) image save --file /image_dump/cirros cirros
(openstack) exit
~$ ls -lah image_dump
total 13M
drwxrwxr-x  2 cems cems 4.0K Mar 12 12:42 .
drwxr-xr-x 63 cems cems 4.0K Mar 12 12:40 ..
-rw-r--r--  1 root root  13M Mar 12 12:42 cirros

Monitoring

We have implemented Prometheus on all of our compute nodes, and can therefore offer tenants detailed time-series monitoring. This can help when troubleshooting whether a failure lies with a compute node or with your application.

We have an HTTPS monitoring portal provided by Grafana on both extcloud05 and extcloud06.

We collect basic metrics over time for all the VMs deployed in your tenancy. Here is an example:

Shared Model

If you wish to collect a more specific or custom subset of metrics, you can deploy your own monitoring backend (Prometheus, InfluxDB, Elasticsearch) within your tenancy. We will then be able to add your data source to our Grafana portal so you can access your own metrics in the same dashboard.
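As a sketch of this shared model, a minimal Prometheus scrape configuration for a self-hosted backend might look like the following. The job name and target address are assumptions, not values from this guide; 9100 is the default node_exporter port:

```shell
# Write a minimal Prometheus config that scrapes node_exporter on one tenant VM
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: tenant-vms              # assumed job name
    static_configs:
      - targets: ['10.0.0.5:9100']    # node_exporter on a tenant VM (example IP)
EOF
```

Point your Prometheus deployment at this file, and we can then add it as a data source in the Grafana portal.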