Wednesday, October 24, 2018

AWS Basic Concepts


   IAM  
-> AWS Identity and Access Management (IAM) is a web service that helps us securely control access to AWS resources.
-> It enables us to manage user authentication and to limit access to our AWS resources to a certain set of users.
-> By using IAM we can create Users & Groups.
-> By using IAM we can allow these IAM groups & users to use AWS services, and we can also deny the usage of services.
-> We use IAM to control who can use our AWS resources (authentication) and what resources they can use & in what ways (authorization).
-> The IAM workflow includes six main elements: Principal, Authentication, Request, Authorization, Actions, & Resources.
-> By default, only the AWS root user has access to all the resources.
-> The most important components of IAM are Users, Groups, Policies, & Roles.
IAM USER :
-> The root user (admin) can create an IAM user, for example when a new employee joins the organization.
-> Each IAM user is associated with only one AWS account.
-> An IAM user is an entity that we create in an AWS account to represent a person or a service that interacts with the AWS environment.
-> The advantage of one-to-one users is that we can assign permissions individually to each user.
IAM Group :
-> A collection of IAM users is an IAM group.
-> We can use IAM groups to specify permissions for multiple users, so that any permission applied to the group is applied to its users as well (see the CLI sketch below).
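A minimal AWS CLI sketch of this: creating a user and a group, then adding the user to the group. The names alice and developers are placeholders, and the CLI is assumed to be configured with sufficient IAM permissions.
# create an IAM user and an IAM group (names are examples)
aws iam create-user --user-name alice
aws iam create-group --group-name developers
# add the user to the group so it inherits the group's permissions
aws iam add-user-to-group --user-name alice --group-name developers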
IAM Policy :
-> An IAM policy sets permissions and controls access to AWS resources.
-> Policies are stored in AWS as JSON documents.
-> Permissions specify who can have access to the resources & what actions they can perform.
-> Policies are of 2 types, namely Managed Policies and Inline Policies.
-> A Managed Policy is a standalone policy that we can attach to multiple entities such as users, groups & roles in our AWS account.
-> An Inline Policy is a policy that we create & manage ourselves and that is embedded directly into a single entity such as a user, group or role.
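A hedged CLI sketch of the two policy types, assuming the user and group from the previous example and a local policy.json document: a managed policy is attached by ARN, while an inline policy is embedded directly into one user.
# managed policy: a standalone policy attached to a group by its ARN
aws iam attach-group-policy --group-name developers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# inline policy: embedded directly into a single user from a local JSON document
aws iam put-user-policy --user-name alice --policy-name alice-inline \
    --policy-document file://policy.json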
IAM Role :
-> An IAM role is a set of permissions that defines what actions are allowed & denied for an entity in AWS.
-> It is similar to a user.
-> A role in IAM can be assumed by any entity, whether an individual or an AWS service.
-> Role permissions are temporary, while user permissions are permanent.
Features of IAM :
1> We can create a separate username & password for individual users or resources, & we can also deny access.
2> The permissions that we give to IAM users are very granular (fine-grained).
3> Secure access to AWS resources for applications running on EC2 instances.
4> IAM supports MFA (Multi-Factor Authentication), where we provide a username, password & a 6-digit OTP for authentication.
5> There is no charge for IAM creation or management.
6> We can reset or rotate IAM passwords.
Multi-Factor Authentication (MFA) :
-> MFA is an additional layer of security provided by AWS.
-> Here, a user's identity is confirmed for AWS login only after performing two levels of verification.
                                                        S3 Bucket 
-> Amazon S3 is a simple storage system for the Internet.
-> It is designed to make web-scale computing easier for developers.
-> Amazon S3 has a simple web services interface that can be used to store & retrieve any amount of data, at any time, from anywhere on the web.
-> It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon itself uses.
-> Simple to get going, simple to use.
-> Programmatic access via web services API.
-> Amazon S3 is an object store, meaning whatever we upload to S3 is called an object.
S3 Concepts :
-> Bucket : A collection of objects; up to 100 buckets per account by default, with names up to 255 characters long.
-> Object : Amazon S3 stores data as objects within buckets. Objects are the fundamental entities stored in Amazon S3. Objects consist of object data & metadata. They are individually addressable data items; any number are allowed per bucket & per account. An object is uniquely identified within a bucket by a key (name) and a version ID.
-> Key : A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key.
-> ACL : An Access Control List; it defines which accounts or groups are granted access to a bucket or object, and the type of access.
-> Versioning : It can be utilized to preserve, recover & restore earlier versions of every object we store in our Amazon S3 bucket.
-> Cross-Region Replication (CRR) : CRR provides automatic copying of every object uploaded to our bucket into a bucket in a different AWS Region.
-> Versioning must be enabled on both the source and destination buckets to enable CRR.
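A minimal CLI sketch of these concepts, assuming a hypothetical bucket name and a configured AWS CLI: create a bucket, upload an object under a key, and enable versioning.
# create a bucket and upload an object under the key 'docs/readme.txt'
aws s3 mb s3://my-example-bucket-2018
aws s3 cp readme.txt s3://my-example-bucket-2018/docs/readme.txt
# enable versioning on the bucket (also a prerequisite for cross-region replication)
aws s3api put-bucket-versioning --bucket my-example-bucket-2018 \
    --versioning-configuration Status=Enabled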
S3 Features : 
-> S3 stores objects up to 5 TB in size.
-> We can read & write entire objects.
-> Every object has a unique developer-assigned key.
-> S3 collects objects into buckets.
-> Every object has a unique URL.
-> S3 provides full control of access rights.
-> S3 supports versioning of objects.
-> Amazon S3 supports virtual-hosted-style and path-style access in all regions.
S3 Bucket Policy :
-> A bucket policy is a resource-based policy, written in the same JSON language as IAM policies, with which we can allow & deny permissions to our Amazon S3 resources.
-> Bucket policies provide centralized access control to buckets and objects based on a variety of conditions, including Amazon S3 operations, requesters, resources, & aspects of the request.
-> Accounts have the power to grant bucket policy permissions & assign employee permissions based on a variety of conditions.
-> With a bucket policy, we can also define security rules that apply to more than one object within a bucket.
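A hedged sketch of applying a bucket policy from the CLI; the bucket name is a placeholder and the policy simply grants public read access to every object in the bucket.
# write a simple policy document that allows anyone to GET objects in the bucket
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-example-bucket-2018/*"
  }]
}
EOF
# attach the policy to the bucket
aws s3api put-bucket-policy --bucket my-example-bucket-2018 --policy file://policy.json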
Data protection :
-> Amazon S3 protects our data using “Data Encryption” & “Versioning”
S3 Access Control List :
-> Each bucket & object in Amazon S3 has an ACL that defines its access control Policy.
When a request is made, Amazon S3 authenticates the request using its standard authentication.
Then checks the ACL to verify sender was granted access to the bucket or object.
If the sender is approved, the request proceeds.Otherwise, Amazon S3 returns an error.
-> Bucket and object ACLs are completely independent.
S3 Use Cases :
-> S3 can be used for media sharing.
-> S3 acts as a media/software distribution channel.
-> S3 is used for backup (server & PC).
-> S3 can also be used for online storage and application storage.
How Does Amazon S3 Work :
-> When files are uploaded to the bucket, the user will specify the type of S3 storage class to be used for those specific objects.
-> Later, users can define features on the bucket such as a bucket policy, lifecycle policies, versioning, etc.
Lifecycle Management : With lifecycle management we can manage & store our objects cost-effectively.
-> With lifecycle management we can configure S3 to move our data between various storage classes on a defined schedule.
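For example, a lifecycle rule that transitions objects to Glacier after 90 days and expires them after 365 days could be applied like this (the bucket name and day counts are illustrative):
# write the lifecycle rule, then attach it to the bucket
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-then-expire",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 365}
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-2018 \
    --lifecycle-configuration file://lifecycle.json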
                                                      Amazon EC2 
-> Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web
Services (AWS) cloud.
-> Using Amazon EC2 eliminates our need to invest in hardware up front, so we can develop and deploy applications faster.
-> we can use Amazon EC2 to launch as many or as few virtual servers as we need, configure security and networking, and manage storage.
-> Amazon EC2 enables us to scale up or down to handle changes in requirements or spikes in popularity, reducing our need to forecast traffic.
-> Here we pay only for what we use, and it is highly secure.
Features of Amazon EC2 :
Amazon EC2 provides the following features:
• Virtual computing environments, known as instances
• Preconfigured templates for our instances, known as Amazon Machine Images (AMIs), that package the bits we need for our server (including the operating system and additional software)
• Various configurations of CPU, memory, storage, and networking capacity for our instances, known as instance types
• Secure login information for our instances using key pairs (AWS stores the public key, and you store the private key in a secure place)
• Storage volumes for temporary data that’s deleted when we stop or terminate our instance, known as instance store volumes
• Persistent storage volumes for our data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes
• Multiple physical locations for our resources, such as instances and Amazon EBS volumes, known as regions and Availability Zones
• A firewall that enables us to specify the protocols, ports, and source IP ranges that can reach our instances using security groups
• Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
• Metadata, known as tags, that we can create and assign to our Amazon EC2 resources
• Virtual networks we can create that are logically isolated from the rest of the AWS cloud, and that we can optionally connect to our own network, known as virtual private clouds (VPCs)

Key Elements of EC2 Instance Creation :
Choosing an AMI :
-> An AMI or Amazon Machine Image is a template that is used to create a new instance/machine based on user requirements.
-> The AMI contains software information, operating system information, volume information, & access permissions.
-> AMIs are of 2 types, namely Predefined AMIs and Custom AMIs.
-> Predefined AMIs are created by Amazon and can be modified by the user.
-> Custom AMIs are created by the user so that they can be reused.
-> We can also get AMIs from the AWS Marketplace.
Choosing an Instance type :
-> An instance type specifies the hardware specifications that are required in the machine from the previous step.
-> These instances are divided into 5 main families. Instance types are fixed and their configurations cannot be altered. They are:
1> Compute Optimized : for applications that require lots of compute/processing power.
2> Memory Optimized : very good for applications that require an in-memory cache.
3> GPU Optimized : very good for applications such as gaming that have large graphical requirements.
4> Storage Optimized : a very good fit for storage servers.
5> General Purpose : used when there is no specific requirement, i.e. the general case.
Configure Instance :
-> We have to specify the number of instances, purchasing options, the kind of network, the subnet, when to assign a Public IP, the IAM role, the shutdown behavior and so on.
Shutdown behavior : Stopping the system & terminating the system under 'Shutdown behavior' are completely different things:
Stopping = temporarily shutting down the system
Terminating = returning control back to Amazon
-> We can also have Reserved Instances; these are reserved for 1 year or 3 years, and the amount is paid upfront or over a span of a few months.
Adding Storage :
-> Here we have options for the selection of storage, which include ephemeral storage (temporary & free), Amazon Elastic Block Store (permanent & paid), and Amazon S3.
-> Free-tier users get access to up to 30 GB of SSD or magnetic storage (selected under 'Volume Type').
Adding Tags :
-> Tags are used for the identification of machines/Instances.
Configuring the Security groups :
-> Security groups help control traffic to & from the instance.
Review : Here we review all the details and are ready to launch; the equivalent CLI launch is sketched below.
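A minimal sketch of the same launch with the AWS CLI; the AMI ID, key pair, security group, and subnet IDs are placeholders.
# launch one t2.micro instance from a chosen AMI, with a key pair, security group, subnet, and a Name tag
aws ec2 run-instances \
    --image-id ami-0abcd1234example \
    --instance-type t2.micro \
    --key-name my-keypair \
    --security-group-ids sg-0123456789example \
    --subnet-id subnet-0123456789example \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-server}]'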
Elastic IP Address : 
-> An Elastic IP address is a static IPv4 address associated with our AWS account that we can attach to an EC2 instance.
-> A public IP is dynamic; it changes every time the instance is stopped & started, so to overcome this changing IP address we use an EIP.
-> An EIP is charged when it is not associated with a running instance.
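A hedged CLI sketch of allocating an EIP and attaching it to an instance; the instance and allocation IDs are placeholders.
# allocate an Elastic IP for use in a VPC, then attach it to a running instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789example --allocation-id eipalloc-0123456789example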
Load Balancing :
-> Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones.
-> Elastic Load Balancing supports 3 types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers.
-> We can select a load balancer based on our application needs.
-> We can create, access, and manage our load balancers using the AWS Management Console, CLI, SDKs, & API.
-> With our load balancer, we pay only for what we use.
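For illustration, an Application Load Balancer spanning two subnets might be created like this (all IDs are placeholders):
# create an Application Load Balancer across two subnets in different Availability Zones
aws elbv2 create-load-balancer --name demo-alb \
    --subnets subnet-0aaa111example subnet-0bbb222example \
    --security-groups sg-0123456789example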
                                                         VPC
-> Amazon Virtual Private Cloud (VPC) enables us to launch AWS resources into a virtual network that we’ve defined.
-> Here we can specify an IP address range for the VPC, add subnets, associate security groups, and configure route tables.
-> Key elements of the VPC are Subnets, Route Tables, Internet Gateway, NAT Gateway, Endpoints, Peering Connections, Network ACLs, Security Groups, & VPN Connections.
Subnets :
-> A Subnet is a range of IP addresses in our VPC. We can launch AWS resources into a specified Subnet.
Use a public Subnet for resources that must be connected to the internet, and a private Subnet for
resources that won’t be connected to the internet.
-> Each AWS account has a default VPC in each Region, with a default subnet in each Availability Zone.
-> If a Subnet’s traffic is routed to an internet gateway, the Subnet is known as a public Subnet.
-> Amazon VPC supports IPv4 and IPv6 addressing, and has different CIDR block size limits for each. By default, all VPCs and subnets must have IPv4 CIDR blocks.
-> If a subnet doesn’t have a route to the internet gateway, the subnet is known as a private subnet.
-> If a subnet doesn’t have a route to the internet gateway, but has its traffic routed to a virtual private gateway for a VPN connection, the subnet is known as a VPN-only subnet.
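A minimal sketch of carving a VPC into two subnets with the CLI; the CIDR blocks and VPC ID are illustrative, and which subnet ends up public depends on the route tables configured next.
# create a VPC with a /16 range, then two /24 subnets inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789example --cidr-block 10.0.1.0/24   # intended public subnet
aws ec2 create-subnet --vpc-id vpc-0123456789example --cidr-block 10.0.2.0/24   # intended private subnet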

Route Tables :
-> A route table contains a set of rules, called routes, that are used to determine where network traffic is directed.
-> Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet.
-> Every subnet that we create is automatically associated with the main route table for the VPC.
-> We can change the association, and we can change the contents of the main route table.
-> A subnet can only be associated with one route table at a time , but we can associate multiple subnets with the same route table.
-> A VPC automatically comes with a main route table that we can modify, & we can also create our own custom route tables.
-> we cannot delete the main route table, but we can replace the main route table with a custom table that we’ve created.
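A hedged sketch of a custom route table that sends internet-bound traffic to an internet gateway and is then associated with one subnet (all IDs are placeholders):
# create a custom route table, add a default route to the internet gateway, and associate it with a subnet
aws ec2 create-route-table --vpc-id vpc-0123456789example
aws ec2 create-route --route-table-id rtb-0123456789example \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789example
aws ec2 associate-route-table --route-table-id rtb-0123456789example --subnet-id subnet-0123456789example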

Internet Gateway :
-> Internet Gateway allows communication between instances in our VPC & the internet.
-> An internet gateway serves 2 purposes: to provide a target in our VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
-> An internet gateway supports IPv4 and IPv6 traffic.
-> To enable access to or from the internet for instances in a VPC subnet, you must do the following:
      –>Attach an internet gateway to our VPC.
      –>Ensure that our subnet’s route table points to the internet gateway.
      –>Ensure that instances in our subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
      –>Ensure that our network access control and security group rules allow the relevant traffic to flow to and from our instance.
-> The 0.0.0.0/0 CIDR block represents all IPv4 addresses; a route to it sends all remaining traffic to the internet gateway.
-> We can create and attach an internet gateway through the console, CLI, or API, as sketched below.
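A minimal sketch (the gateway and VPC IDs are placeholders); the 0.0.0.0/0 route shown in the route-table sketch above is what points a subnet at this gateway and makes it public.
# create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789example --vpc-id vpc-0123456789example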
NAT Gateway :
-> We can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
-> we are charged for creating and using a NAT gateway in our account.
-> NAT gateway hourly usage and data processing rates apply.
-> Amazon EC2 charges for data transfer also apply.
-> To create a NAT gateway, we must specify the public subnet in which the NAT gateway should reside.
-> We must also specify an Elastic IP address to associate with the NAT gateway when we create it.
-> Each NAT gateway is created in a specific Availability Zone , there is a limit on the number of NAT gateways we can create in an Availability Zone.
-> If we no longer need a NAT gateway, we can delete it. Deleting a NAT gateway disassociates its Elastic IP address but does not release the address from our account, so we need to release the EIP from the EC2 console.
-> A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps. If we require more, we can distribute the workload by splitting our resources into multiple subnets, and creating a NAT gateway in each subnet.
-> We can associate exactly one Elastic IP address with a NAT gateway, we cannot disassociate an Elastic IP address from a NAT gateway after it’s created.
-> To use a different Elastic IP address for our NAT gateway, we must create a new NAT gateway .
-> We cannot associate a security group with a NAT gateway, We can use security groups for our instances in the private subnets to control the traffic to and from those instances.
-> We can use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located.
-> A NAT gateway uses ports 1024–65535.
-> We can migrate from a NAT instance to a NAT gateway.
-> A NAT gateway cannot send traffic over VPC endpoints, VPN connections, AWS Direct Connect, or VPC peering connections.
-> By default, IAM users do not have permission to work with NAT gateways; for this we can create an IAM user policy.
-> We can perform NAT gateway tasks through the CLI, API, or the VPC wizard in the console, as sketched below.
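A hedged sketch of creating a NAT gateway in a public subnet with a newly allocated Elastic IP (IDs are placeholders); the private subnet's route table would then point 0.0.0.0/0 at the returned NAT gateway ID.
# allocate an Elastic IP, then create the NAT gateway in a public subnet using that allocation
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0123456789example --allocation-id eipalloc-0123456789example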

Endpoints : 
-> A VPC endpoint enables us to privately connect our VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink, without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
-> By default, IAM users do not have permission to work with endpoints, We can create an IAM user policy that grants users the permissions to create, modify, describe, and delete endpoints.

Peering connections : 
-> A VPC peering connection is a networking connection between 2 VPCs that enables us to route traffic between them privately.
-> Instances in either VPC can communicate with each other as if they are within the same network.
-> We can create a VPC peering connection between our own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
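For illustration, requesting and accepting a peering connection between two VPCs might look like this (IDs are placeholders); routes toward the peer's CIDR block still have to be added to each VPC's route tables afterwards.
# request peering from one VPC to another, then accept it (in the same or the peer account)
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa111example --peer-vpc-id vpc-0bbb222example
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789example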
Security Groups – Network ACLs :
-> Security groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
-> A NACL is an optional layer of security for our VPC that acts as a firewall controlling traffic in and out of one or more subnets, at the subnet level.
-> By default a VPC is associated with a default NACL.
-> We can also create a custom NACL.
-> It supports/allows IPv4 & IPv6 traffic.
-> Each subnet in our VPC must be associated with a network ACL; we can associate a NACL with multiple subnets,
-> however a subnet can be associated with only one network ACL at a time.
VPN Connections :
-> A virtual private network (VPN) connection enables communication with our corporate network.

Puppet Quick Start

Puppet plan:
=======================

1.Installing Puppet Enterprise Server & Nodes

2.Creating Manifest File
3.Configuring the Facts, Variables & Control Statements in the Manifest File
4.Deploying EC2 Resources With Puppet

Installing Puppet Enterprise Server & Nodes:

Step 1: Create Puppet Master & Puppet Node in Server Lab Control Center
-> Go to Server Lab Control Center -> Distribution
-> Choose Puppet Enterprise 2016 -> Start the Server
-> After creating the servers, the user & password details are displayed at the end of the page
-> Copy the Public Hostname
Step 2: Login to Server in CLI using Public Hostname
ssh username@PublicHostname
Step 3: Configure the /etc/hosts with your IP Address & Hostname in Puppet Master & Puppet Node
vi /etc/hosts
Step 4: Run the Puppet installer Command
cd /root/puppet-enterprise-2016.2.1-el-7-x86_64
./puppet-enterprise-installer
Choose the Guided install
Step 5: Access the Puppet in the Console
https://SERVERIP:3000
->Let’s Get Started->Select Monolithic
->Select Install on this Server->Select Puppet Master FQDN
->Enable Application Orchestration->Install PostgreSQL on Puppet Server
->Enter Console Admin passwd then Submit->Continue->Validating the installation
->Deploy Now->Click on Start using Puppet Enterprise->Login with username & Passwd
Step 6: Copy the unsigned certificate for installing the node
            Login to Puppet Enterprise Console->Go to Nodes tab
->Select Unsigned Certificates->Copy the Url
Step 7: Install the Node in CLI
            Login to Node->Paste the Unsigned Certificate Url
Step 8: Check the Node is added or not in the Puppet Enterprise Console
Go to Nodes->Select Inventory->Then we see the node details
Go to Nodes->Unsigned Certificates->Accept the node DNS certificate
Step 9: Apply the catalog in CLI of Agent
puppet agent -t

Puppet Manifests: Resources, Attributes & Parameters

Step 1: Generate a Password using the openssl
          
openssl passwd -1
Copy the generated password hash & paste it into site.pp
Step 2: Create manifests
           
cd /etc/puppetlabs/code/environments/production/manifests
vi site.pp
Add the Code
node default {

  # ensures every node has these user/password credentials
  user { 'username':
    ensure     => present,
    groups     => 'groupname',
    managehome => true,
    password   => 'passwd hash generated by openssl',
  }
}
Step 3: Validate the Puppet Code of Manifest File
           
puppet parser validate site.pp
Step 4: Compile the Puppet Code of Manifest File
puppet apply --noop site.pp    # dry run: shows what would change, without making changes
puppet apply site.pp           # applies the changes
Step 5: Check the individual status of Nodes in Puppet Enterprise Console
Configuration -> Overview -> Enforcement
We will see detailed info showing how many nodes failed, succeeded, or had no changes, e.g.
            0 with failed changes
            1 with successful changes
            2 with no changes
Step 6: Check the reports of the Nodes in Puppet Enterprise Console   
Configuration -> Reports -> Log -> see the logs of the node where Puppet actually made the changes

Facts, Variables & Control Statements

Step 1: Create module directory
           
cd /etc/puppetlabs/code/environments/production/modules
mkdir motd
Under this directory create the directories examples, facts.d, files, lib, manifests, spec, & templates
 Step 2: Create Manifest file under the motd directory
cd motd/manifests
vi init.pp
#Add the Code
class motd {
  file { '/etc/motd':
    ensure => file,
    path   => '/etc/motd',
    source => 'puppet:///modules/motd/motd',
  }
}
Step 3: Check the domain details & code  using the fact
facter | grep -A 5 -B 5 domain
Select domain name from Networking attribute
Step 4: Add the Facts code,Control Statements & Variables in the Manifest file for Puppet Master & Puppet Node
vi init.pp
#Add the Code
class motd {
  $hostname   = $facts['networking']['fqdn']
  $os_name    = $facts['os']['name']
  $os_release = $facts['os']['release']['full']

  if $hostname == 'hostname of the master server' {
    file { '/etc/motd':
      ensure  => file,
      path    => '/etc/motd',
      content => "\n\n[Puppet Master] ${hostname} ${os_name} ${os_release}\n\n",
    }
  }
  elsif $hostname == 'hostname of the node server' {
    file { '/etc/motd':
      ensure  => file,
      path    => '/etc/motd',
      content => "\n\n[Puppet Node] ${hostname} ${os_name} ${os_release}\n\n",
    }
  }
}
Step 5: Validate the Puppet Code of Manifest File
           
puppet parser validate init.pp
Step 6: Compile the Puppet Code of Manifest File
puppet apply --noop init.pp
puppet apply init.pp
Step 7: Check the motd file
cat /etc/motd
Step 8: Log in to one of the Puppet nodes and verify that the message of the day appears

Deploying EC2 Resources With Puppet

  • SET UP
  • Development
  • Deploying the Instances
  • Terminating the Instances
  • Check in the AWS Console that the instance is shutting down

SET UP:

Step 1: Check that the Puppet agent is installed locally on your machine
           
which puppet
Step 2: Verify The Version of Puppet
puppet -V
Step 3: Install the aws-sdk-core and retries gems
sudo /opt/puppetlabs/puppet/bin/gem install aws-sdk-core retries
Step 4: Install Puppetlabs-aws Module
/opt/puppetlabs/bin/puppet module install puppetlabs-aws
Step 5: Verify Configured Path
puppet config print modulepath
It will return the currently configured module path, e.g.
/Users/rilindo/.puppetlabs/etc/code/modules:/opt/puppetlabs/puppet/modules
Step 6: Export the AWS Credentials in your Shell
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
Step 7: Check the AWS Credentials file
cat ~/.aws/credentials
This data is present in the Credentials file
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
Step 8: Finally verify the setup by running the puppet resource command
puppet resource ec2_instance

Development:

Step 1: Create a directory  src/puppet/modules
mkdir -p src/puppet/modules
cd src/puppet/modules/
mkdir aws_demo
Step 2: Under aws_demo directory ,Create a .pp file
vi aws_demo/create.pp
In the vi editor, insert the Puppet code:
ec2_instance { 'myinstancename':
  ensure          => present,
  region          => 'us-west-1',
  image_id        => 'ami-48db9d28',
  instance_type   => 't2.micro',
  security_groups => ['Access'],
  subnet          => 'Public',
}
Step 3: Run the Puppet Parser to Validate the file
puppet parser validate aws_demo/create.pp

Deploying the Instance:

Step 1: Run the Puppet apply to .pp file
puppet apply aws_demo/create.pp
-> Log in to your AWS web console and go to EC2 -> Instances
-> You will see your instance being created in AWS

Terminating the Instances:

Step 1: Copy the file create.pp to destroy.pp
cp aws_demo/create.pp aws_demo/destroy.pp
sudo vi aws_demo/destroy.pp
Make one change in the Puppet code: set ensure to absent
ec2_instance { 'myinstancename':
  ensure          => absent,
  region          => 'us-west-1',
  image_id        => 'ami-48db9d28',
  instance_type   => 't2.micro',
  security_groups => ['Access'],
  subnet          => 'Public',
}
Step 2: Validate the destroy.pp file using Puppet parser
puppet parser validate aws_demo/destroy.pp
Step 3: Run the destroy.pp file
puppet apply aws_demo/destroy.pp

Check in the AWS Console that the instance is shutting down

Setup with Hiera:
Step 1: Create a directory hieradata under ~/src/puppet
mkdir hieradata
Step 2: Create a yaml file
vi hieradata/common.yaml
Insert the Following Attributes
---
ami: ami-48db9d28
region: us-west-1
Step 3: Create a hiera.yaml file under the ~/src/puppet directory
---
:backends:
  - yaml
:hierarchy:
  - common
:yaml:
  :datadir: 'hieradata'
Step 4: Copy modules/aws_demo/create.pp to modules/aws_demo/create_with_hiera.pp
vi modules/aws_demo/create_with_hiera.pp
         ec2_instance { 'myinstancename_withhiera':
         ensure              => present,
         region              => hiera('region'),
         image_id            => hiera('ami'),
         instance_type       => 't2.micro',
         security_groups     => ['Access'],
         subnet              => 'Public',
}
 Step 5: Validating the Code
puppet parser validate modules/aws_demo/create_with_hiera.pp
Step 6: Execute the code      
puppet apply modules/aws_demo/create_with_hiera.pp --hiera_config hiera.yaml
-> Log into the AWS Console, then verify the instance
