AWS : Creating an EC2 instance and attaching Amazon EBS volume to the instance using Python boto module with User data
The following Python code creates an instance and an EBS volume, then attaches the volume to the instance. We still need to do more work after that, such as creating a filesystem on the volume and mounting it.
We placed our AWS credentials in ~/.boto:
[Credentials]
aws_access_key_id = "key_id"
aws_secret_access_key = "secret_access_key"
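The [Credentials] section is standard INI syntax, which is why boto can read it with boto.config.get() as shown below. As a quick sanity check, the same structure can be parsed with the standard library's configparser (a minimal sketch against an inline sample, with made-up placeholder values):

```python
from configparser import ConfigParser

# The same INI structure boto reads from ~/.boto (placeholder values)
sample = """\
[Credentials]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecretkey
"""

parser = ConfigParser()
parser.read_string(sample)

print(parser.get('Credentials', 'aws_access_key_id'))      # AKIAEXAMPLEKEY
print(parser.get('Credentials', 'aws_secret_access_key'))  # examplesecretkey
```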
Here is our code:
#!/usr/bin/env python
import time
import boto
import boto.ec2

# Credentials come from the [Credentials] section of ~/.boto
AWS_ACCESS_KEY = boto.config.get('Credentials', 'aws_access_key_id')
AWS_ACCESS_SECRET_KEY = boto.config.get('Credentials', 'aws_secret_access_key')

conn = boto.ec2.connect_to_region("us-west-1",
                                  aws_access_key_id=AWS_ACCESS_KEY,
                                  aws_secret_access_key=AWS_ACCESS_SECRET_KEY)

#### creating a new instance ####
# The instance is placed in us-west-1c so that the volume
# (which must live in the same availability zone) can be attached.
new_reservation = conn.run_instances(
    "ami-d16a8b95",
    key_name="bogo",
    instance_type="t1.micro",
    security_group_ids=["sg-0841236d"],
    placement="us-west-1c")
instance = new_reservation.instances[0]
conn.create_tags([instance.id], {"Name": "bogo-instance"})

while instance.state == u'pending':
    print "Instance state: %s" % instance.state
    time.sleep(10)
    instance.update()

print "Instance state: %s" % instance.state
print "Public dns: %s" % instance.public_dns_name

#### Create a volume ####
# create_volume(size, zone, snapshot=None, volume_type=None, iops=None)
vol = conn.create_volume(10, "us-west-1c")
print 'Volume Id: ', vol.id

# Add a Name tag to the new volume so we can find it.
conn.create_tags([vol.id], {"Name": "bogo-volume"})

# Wait until the volume is ready and available:
curr_vol = conn.get_all_volumes([vol.id])[0]
while curr_vol.status == 'creating':
    curr_vol = conn.get_all_volumes([vol.id])[0]
    print 'Current Volume Status: ', curr_vol.status
    time.sleep(2)
print 'Current Volume Zone: ', curr_vol.zone

#### Attach a volume ####
result = conn.attach_volume(vol.id, instance.id, "/dev/sdf")
print 'Attach Volume Result: ', result
Output:
$ python launch_cv.py
Instance state: pending
Instance state: pending
Instance state: pending
Instance state: running
Public dns: ec2-52-8-88-156.us-west-1.compute.amazonaws.com
Volume Id:  vol-71c42689
Current Volume Status:  creating
Current Volume Status:  creating
Current Volume Status:  available
Current Volume Zone:  us-west-1c
Attach Volume Result:  attaching
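Both polling loops in the script (instance state, then volume status) follow the same fetch-check-sleep pattern. They could be factored into one generic helper; a sketch in plain Python (the function name and signature are our own, not a boto API):

```python
import time

def wait_for(fetch_state, done_states, interval=5, timeout=300):
    """Poll fetch_state() until it returns one of done_states.

    fetch_state -- zero-argument callable returning the current state string
    done_states -- set of states that end the wait, e.g. {'running'}
    Returns the final state, or raises RuntimeError on timeout.
    """
    deadline = time.time() + timeout
    state = fetch_state()
    while state not in done_states:
        if time.time() > deadline:
            raise RuntimeError('timed out in state %r' % state)
        time.sleep(interval)
        state = fetch_state()
    return state

# With the boto objects above, this would be used roughly as:
#   wait_for(lambda: (instance.update(), instance.state)[1], {'running'})
#   wait_for(lambda: conn.get_all_volumes([vol.id])[0].status, {'available'})
```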
Let's log in to our instance and, using lsblk, check the available disk devices and their mount points (if applicable) to help us determine the correct device name to use.
ubuntu@ip-172-31-5-233:~$ lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf  202:80   0  10G  0 disk
xvda1 202:1    0   8G  0 disk /
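Notice that we attached the volume as /dev/sdf, but lsblk shows it as xvdf: on Xen-based instance types, the guest kernel renames sdX device names to xvdX. A tiny helper capturing that convention (our own illustration, not an AWS API):

```python
def xen_device_name(attach_name):
    """Map an EC2 attach device name like '/dev/sdf' to the name
    a Xen-based guest kernel actually exposes ('/dev/xvdf')."""
    prefix = '/dev/sd'
    if attach_name.startswith(prefix):
        return '/dev/xvd' + attach_name[len(prefix):]
    return attach_name  # already an xvd-style (or other) name

print(xen_device_name('/dev/sdf'))  # /dev/xvdf
```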
Since we now have a new, empty volume, we can create its filesystem on /dev/xvdf. The mount command then mounts the filesystem on /dev/xvdf onto the directory /vol, so going into /vol shows the filesystem that is on /dev/xvdf:

$ sudo mkfs.ext4 /dev/xvdf
$ sudo mkdir /vol
$ sudo mount /dev/xvdf /vol
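To verify the mount from a script rather than by eye, one can scan /proc/mounts. A minimal sketch (the helper is our own, shown here against a canned sample rather than a live system):

```python
def is_mounted(device, mountpoint, mounts_text):
    """Return True if /proc/mounts-style text shows device at mountpoint."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == device and fields[1] == mountpoint:
            return True
    return False

# Sample lines in /proc/mounts format:
sample = """\
/dev/xvda1 / ext4 rw,discard 0 0
/dev/xvdf /vol ext4 rw,noatime 0 0
"""

print(is_mounted('/dev/xvdf', '/vol', sample))  # True
```

On a live instance, `open('/proc/mounts').read()` would be passed in place of the sample.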
The configuration file /etc/fstab contains the information needed to mount partitions automatically at boot. So, we add our mount info to the file:
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
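The fstab line has six whitespace-separated fields: device, mount point, filesystem type ("auto" lets mount detect it), mount options ("noatime" skips access-time updates), and the dump and fsck-pass flags ("0 0" disables both). A helper that builds such a line (our own convenience function, not part of any library):

```python
def fstab_entry(device, mountpoint, fstype='auto', options='noatime',
                dump=0, fsck_pass=0):
    """Build one /etc/fstab line from its six fields."""
    return '%s %s %s %s %d %d' % (device, mountpoint, fstype, options,
                                  dump, fsck_pass)

print(fstab_entry('/dev/xvdf', '/vol'))
# /dev/xvdf /vol auto noatime 0 0
```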
When we create an instance using the AWS console, we can paste the following script into Step 3: Configure Instance Details, under Advanced Details, as User data:
#!/bin/bash
sudo mkfs.ext4 /dev/xvdf
sudo mkdir /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
Now, we can pass the script via the user_data parameter of the run_instances method like this:
myCode = """#!/bin/bash
sudo mkfs.ext4 /dev/xvdf
sudo mkdir /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab"""

#### creating a new instance ####
new_reservation = conn.run_instances(
    "ami-d16a8b95",
    key_name="bogo",
    instance_type="t1.micro",
    security_group_ids=["sg-0841236d"],
    user_data=myCode)
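One caveat: user data runs at first boot, which can happen before attach_volume completes, in which case /dev/xvdf does not yet exist when mkfs.ext4 runs. A more defensive user-data script can wait for the block device to appear first; here is our own hardened variant (not from the AWS docs), built as a Python string in the same style. Since user data runs as root, sudo is also unnecessary:

```python
# User data that waits (up to ~2.5 minutes) for /dev/xvdf to appear
# before formatting and mounting it. Our own variant, an assumption
# that the polling interval and timeout suit a t1.micro boot.
myCode = """#!/bin/bash
for i in $(seq 1 30); do
    [ -b /dev/xvdf ] && break
    sleep 5
done
mkfs.ext4 /dev/xvdf
mkdir /vol
echo "/dev/xvdf /vol auto noatime 0 0" >> /etc/fstab
mount /vol
"""

print(myCode)
```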
Here is our full code:
#!/usr/bin/env python
import time
import boto
import boto.ec2

conn = boto.ec2.connect_to_region("us-west-1")

myCode = """#!/bin/bash
sudo mkfs.ext4 /dev/xvdf
sudo mkdir /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab"""

#### creating a new instance ####
# The instance is placed in us-west-1c so that the volume
# (which must live in the same availability zone) can be attached.
new_reservation = conn.run_instances(
    "ami-d16a8b95",
    key_name="bogo",
    instance_type="t1.micro",
    security_group_ids=["sg-0841236d"],
    placement="us-west-1c",
    user_data=myCode)
instance = new_reservation.instances[0]
conn.create_tags([instance.id], {"Name": "bogo-instance"})

while instance.state == u'pending':
    print "Instance state: %s" % instance.state
    time.sleep(10)
    instance.update()

print "Instance state: %s" % instance.state
print "Public dns: %s" % instance.public_dns_name

#### Create a volume ####
# create_volume(size, zone, snapshot=None, volume_type=None, iops=None)
vol = conn.create_volume(10, "us-west-1c")
print 'Volume Id: ', vol.id

# Add a Name tag to the new volume so we can find it.
conn.create_tags([vol.id], {"Name": "bogo-volume"})

# Wait until the volume is ready and available:
curr_vol = conn.get_all_volumes([vol.id])[0]
while curr_vol.status == 'creating':
    curr_vol = conn.get_all_volumes([vol.id])[0]
    print 'Current Volume Status: ', curr_vol.status
    time.sleep(2)
print 'Current Volume Zone: ', curr_vol.zone

#### Attach a volume ####
result = conn.attach_volume(vol.id, instance.id, "/dev/sdf")
print 'Attach Volume Result: ', result
This creates a new instance and a volume, and attaches the volume to the instance. Note that user data runs at first boot, so /dev/xvdf must already be attached by the time the script runs for the mkfs step to succeed. After rebooting the instance (or running sudo mount -a), we can see that our new volume has been mounted:
ubuntu@ip-172-31-10-2:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  776M  6.6G  11% /
...
/dev/xvdf       9.8G   23M  9.2G   1% /vol
Or:
ubuntu@ip-172-31-10-2:~$ lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf  202:80   0  10G  0 disk /vol
xvda1 202:1    0   8G  0 disk /