Amazon Elastic Compute Cloud - EC2
(virtual server)
What is a Client-Server Model?
Almost all of modern computing uses a basic client-server model. Let's recap what a client-server model is.
In computing, a client can be a web browser or desktop application that a person interacts with to make requests to computer servers. A server can be services such as Amazon Elastic Compute Cloud (Amazon EC2), a type of virtual server.
For example, suppose that a client makes a request for a news article, the score in an online game, or a funny video. The server evaluates the details of this request and fulfills it by returning the information to the client.
----------------------------------------------------------------------------------------
Deployment Models for Cloud Computing
The three cloud computing deployment models are cloud-based, on-premises, and hybrid.
Cloud-based Deployment:
-----------------------
Run all parts of the application in the cloud.
Migrate existing applications to the cloud.
Design and build new applications in the cloud.
For example, a company might create an application consisting of virtual servers, databases, and networking components that are fully based in the cloud.
On-premises Deployment:
-----------------------
Deploy resources by using virtualization and resource management tools.
Increase resource utilization by using application management and virtualization technologies.
On-premises deployment is also known as a private cloud deployment.
In this model, resources are deployed on premises by using virtualization and resource management tools.
For example, you might have applications that run on technology that is fully kept in your on-premises data center. Though this model is much like legacy IT infrastructure, its incorporation of application management and virtualization technologies helps to increase resource utilization.
Hybrid Deployment:
-------------------
Connect cloud-based resources to on-premises infrastructure.
Integrate cloud-based resources with legacy IT applications.
For example, suppose that a company wants to use cloud services that can automate batch data processing and analytics. However, the company has several legacy applications that are more suitable on premises and will not be migrated to the cloud. With a hybrid deployment, the company would be able to keep the legacy applications on premises while benefiting from the data and analytics services that run in the cloud.
---------------------------------------------------------------------------------------------------------------
Benefits of Cloud Computing:
Trade upfront expense for variable expense:
--------------------------------------------
Upfront expense refers to data centers, physical servers, and other resources that you would need to invest in before using them. Variable expense means you only pay for computing resources you consume instead of investing heavily in data centers and servers before you know how you’re going to use them.
By taking a cloud computing approach that offers the benefit of variable expense, companies can implement innovative solutions while saving on costs.
Stop spending money to run and maintain data centers:
-------------------------------------------------------
Computing in data centers often requires you to spend more money and time managing infrastructure and servers. A benefit of cloud computing is the ability to focus less on these tasks and more on your applications and customers.
Stop guessing capacity:
-----------------------
With cloud computing, you don’t have to predict how much infrastructure capacity you will need before deploying an application. For example, you can launch Amazon EC2 instances when needed, and pay only for the compute time you use. Instead of paying for unused resources or having to deal with limited capacity, you can access only the capacity that you need. You can also scale in or scale out in response to demand.
Benefit from massive economies of scale:
-----------------------------------------
By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers can aggregate in the cloud, providers such as AWS can achieve higher economies of scale. These economies of scale translate into lower pay-as-you-go prices.
Increase speed and agility:
---------------------------
The flexibility of cloud computing makes it easier for you to develop and deploy applications.
This flexibility provides you with more time to experiment and innovate. When computing in data centers, it may take weeks to obtain new resources that you need. By comparison, cloud computing enables you to access new resources within minutes.
Go global in minutes:
-----------------------
The global footprint of the AWS Cloud enables you to deploy applications to customers around the world quickly, while providing them with low latency. This means that even if you are located in a different part of the world than your customers, customers are able to access your applications with minimal delays.
======================================================================================================================================================
EC2 for compute - servers are virtual, and the service you use to gain access to virtual servers is called EC2.
Problems with on-premises servers: the time and money it takes to get up and running with on-premises resources is fairly high.
You do a bunch of research to figure out what type of servers you want to buy and how many you'll need.
Then you purchase that hardware upfront. You'll wait multiple weeks or months for a vendor to deliver those servers to you. You then take them to a data center that you own or rent, install them, rack and stack them, and wire them all up.
And you're stuck with them, whether you use them or not.
AWS has already built and secured the data centers. AWS has already bought the servers, racked and stacked them, and they are already online, ready to be used.
AWS is constantly operating a massive amount of compute capacity, and you can use whatever portion of that capacity when you need it.
All you have to do is request the EC2 instances you want, and they will launch and boot up, ready to be used within a few minutes.
EC2 runs on top of physical host machines managed by AWS using virtualization technology.
When you spin up an EC2 instance, you aren't necessarily taking an entire host to yourself. Instead, you are sharing the host with multiple other instances, otherwise known as virtual machines. A hypervisor running on the host machine is responsible for sharing the underlying physical resources between the virtual machines.
---------------------------------------------------------------------------------------------------------------------------------------------------
How Amazon EC2 works
1)Launch
Begin by selecting a template with basic configurations for your instance. These configurations include the operating system, application server, or applications. You also select the instance type, which is the specific hardware configuration of your instance.
2)Connect
You can connect to the instance in several ways. Your programs and applications have multiple different methods to connect directly to the instance and exchange data. Users can also connect to the instance by logging in and accessing the computer desktop.
3)Use
After you have connected to the instance, you can begin using it. You can run commands to install software, add storage, copy and organize files, and more.
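As an illustration of the launch step, here is a minimal sketch using the AWS SDK for Python (boto3); the AMI ID, key pair name, Region, and instance type are placeholder values you would replace with your own:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Launch one t2.micro instance from a placeholder Amazon Machine Image (AMI).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID (operating system/template)
    InstanceType="t2.micro",           # the instance type (hardware configuration)
    KeyName="my-key-pair",             # placeholder key pair used to connect later
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)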
Types of EC2 instances:
AWS has different types of EC2 instances that you can spin up and deploy into your AWS environment. Each instance type is grouped under an instance family and is optimized for certain types of tasks.
The different instance families in EC2 are general purpose, compute optimized, memory optimized, accelerated computing, and storage optimized.
General purpose instances:
-------------------------
General purpose instances provide a good balance of compute, memory, and networking resources. You can use them for a variety of diverse workloads, such as:
application servers
gaming servers
backend servers for enterprise applications
small and medium databases
web servers and code repositories
Compute optimized:
--------------------
Ideal for compute-intensive workloads, such as:
batch processing workloads that require processing many transactions in a single group
gaming servers
high performance computing (HPC)
scientific modeling
Memory optimized:
-------------------
Good for memory-intensive tasks that process large datasets in memory.
Accelerated computing:
----------------------
Use hardware accelerators to perform functions such as graphics processing and data pattern matching more efficiently than software running on CPUs.
Storage optimized:
--------------------
Designed for workloads that require high, sequential read and write access to large datasets on local storage. Example use cases include distributed file systems, data warehousing applications, and high-frequency online transaction processing (OLTP) systems.
In computing, the term input/output operations per second (IOPS) is a metric that measures the performance of a storage device: how many input or output operations it can perform in one second. Storage optimized instances are designed to deliver a high number of low-latency IOPS to applications.
---------------------------------------------------------------------------------------------------------------------------------------------------------
Payment methods for EC2 (billing)
On-Demand: you only pay for the duration that your instance runs. This can be per hour or per second, depending on the instance type and operating system you choose to run.
----------- You don't need any prior contracts or communication with AWS to use On-Demand pricing.
Savings Plans: Savings Plans offer low prices on EC2 usage in exchange for a commitment to a consistent amount of usage, measured in dollars per hour, for a one- or three-year term. This flexible pricing model can provide savings of up to 72% on your AWS compute usage.
------------
Reserved Instances: reserve an instance with a particular spec in a particular Region and use it for a one- or three-year term.
------------------ (less flexible)
Spot Instances: use spare Amazon EC2 computing capacity at discounts of up to 90% off the On-Demand price. The catch is that AWS can reclaim the instance whenever it needs the capacity back, giving you a two-minute warning to finish up work and save state. You can always resume later if needed, so when choosing Spot Instances, make sure your workloads can tolerate being interrupted.
----------------
Dedicated Hosts: physical hosts dedicated for your use of EC2. These are usually for meeting certain compliance requirements, and nobody else will share tenancy of that host.
--------------------------------------------------------------------------------
Scalability on EC2
------------------
Amazon EC2 Auto Scaling.
Within Amazon EC2 Auto Scaling, you can use two approaches:
dynamic scaling
predictive scaling.
Dynamic scaling responds to changing demand.
Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.
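A minimal boto3 sketch of dynamic scaling: it creates an Auto Scaling group and attaches a target tracking policy that keeps average CPU around 50%. The launch template name, Availability Zones, and size limits are placeholder assumptions:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-1")

# Create the Auto Scaling group from an existing (placeholder) launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    AvailabilityZones=["us-west-1a", "us-west-1b"],
)

# Dynamic scaling: track average CPU utilization and add/remove instances as demand changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-around-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)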
----------------------------------------------------------------------
Elastic Load Balancer:
---------------------
A load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. This means that as you add or remove Amazon EC2 instances in response to the amount of incoming traffic, these requests route to the load balancer first. Then, the requests spread across multiple resources that will handle them.
--------------------------------------------------------------------------
Simple Notification Service (Amazon SNS)
----------------------------------------
Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers. This is similar to the coffee shop; the cashier provides coffee orders to the barista who makes the drinks.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
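A small boto3 sketch of the publish/subscribe flow: create a topic, add a subscriber, and publish a message. The topic name and email address are placeholders:

import boto3

sns = boto3.client("sns", region_name="us-west-1")

# The topic is the "cashier" that fans messages out to subscribers.
topic_arn = sns.create_topic(Name="coffee-orders")["TopicArn"]

# Subscribe an endpoint (here an email address; it could also be SQS, Lambda, HTTP, ...).
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="barista@example.com")

# Publish a message; every confirmed subscriber receives it.
sns.publish(TopicArn=topic_arn, Subject="New order", Message="1 medium latte")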
--------------------------------------------------------------------------
Amazon SQS:
-----------
Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing.
With the help of SQS, you can send, store, and receive messages between software components at any volume without losing messages.
For example, in a flight-search application: a web server receives the user's request ("what is the user looking for?") and puts a message describing it into an SQS queue. A worker EC2 instance continuously polls the queue, looking for jobs to do. Once it gets a job, it processes it: it queries the airline service for the best possible flights and sends the result to the web server, which returns it to the user. The user then selects the best flight according to his or her budget.
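A rough boto3 sketch of that flow: the producer (web server) sends the flight request to a queue, and a worker polls the queue, processes the job, and deletes the message. The queue name and message fields are placeholders:

import json
import boto3

sqs = boto3.client("sqs", region_name="us-west-1")
queue_url = sqs.create_queue(QueueName="flight-search-requests")["QueueUrl"]

# Producer side (web server): enqueue the user's request.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"from": "SFO", "to": "JFK", "date": "2024-06-01"}),
)

# Consumer side (worker EC2 instance): poll the queue, process the job, then delete it.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    request = json.loads(msg["Body"])
    print("Searching flights for:", request)      # the airline-service query would go here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])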
----------------------------------------------------------
AWS Lambda:
----------
You upload your code to Lambda.
You set your code to trigger from an event source, such as AWS services, mobile applications, or HTTP endpoints.
Lambda runs your code only when triggered.
You pay only for the compute time that you use. For example, with an image-resizing function, you would pay only for the compute time used when new images are uploaded; uploading the images triggers Lambda to run the code for the image-resizing function.
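A minimal sketch of such a Lambda function in Python, assuming an S3 upload is the event source; the resize_image helper is hypothetical and stands in for the real image-processing code:

def resize_image(bucket, key):
    # Hypothetical helper: download the object, resize it, and write the result back.
    print(f"Resizing s3://{bucket}/{key}")

def lambda_handler(event, context):
    # Lambda invokes this handler only when the trigger fires (here, an S3 upload event).
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        resize_image(bucket, key)
    return {"status": "done"}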
=============================================================================================
Selecting a Region:
----------------------
Compliance with data governance and legal requirements
Proximity to your customers
Available services within a Region
Pricing
-------------------------------------------------------
Availability Zones:
-----------------
An Availability Zone is a single data center or a group of data centers within a Region. Availability Zones are located tens of miles apart from each other. This is close enough to have low latency (the time between when content is requested and when it is received) between Availability Zones. However, if a disaster occurs in one part of the Region, they are distant enough to reduce the chance that multiple Availability Zones are affected.
Amazon EC2 instance in a single Availability Zone
Suppose that you’re running an application on a single Amazon EC2 instance in the Northern California Region. The instance is running in the us-west-1a Availability Zone. If us-west-1a were to fail, you would lose your instance.
Amazon EC2 instances in multiple Availability Zones
A best practice is to run applications across at least two Availability Zones in a Region. In this example, you might choose to run a second Amazon EC2 instance in us-west-1b.
-------------------------------------------------------------------------------------------------
Edge Locations
----------------
Origin
Suppose that your company’s data is stored in Brazil, and you have customers who live in China. To provide content to these customers, you don’t need to move all the content to one of the Chinese Regions.
Edge Location
Instead of requiring your customers to get their data from Brazil, you can cache a copy locally at an edge location that is close to your customers in China.
Customer
When a customer in China requests one of your files, Amazon CloudFront retrieves the file from the cache in the edge location and delivers the file to the customer. The file is delivered to the customer faster because it came from the edge location near China instead of the original source in Brazil.
----------------------------------------------------------------
Ways to Interact with AWS Services
AWS Management Console:
----------------------
The AWS Management Console is a web-based interface for accessing and managing AWS services. You can quickly access recently used services and search for other services by name, keyword, or acronym. The console includes wizards and automated workflows that can simplify the process of completing tasks.
You can also use the AWS Console mobile application to perform tasks such as monitoring resources, viewing alarms, and accessing billing information. Multiple identities can stay logged into the AWS Console mobile app at the same time.
AWS Command Line Interface:
----------------------------
AWS CLI enables you to control multiple AWS services directly from the command line within one tool. AWS CLI is available for users on Windows, macOS, and Linux.
By using AWS CLI, you can automate the actions that your services and applications perform through scripts. For example, you can use commands to launch an Amazon EC2 instance, connect an Amazon EC2 instance to a specific Auto Scaling group, and more.
Software development kits (SDKs):
-------------------------------------
SDKs make it easier for you to use AWS services through an API designed for your programming language or platform. SDKs enable you to use AWS services with your existing applications or create entirely new applications that will run on AWS.
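For example, with the AWS SDK for Python (boto3), a couple of lines are enough to call an AWS service from your own application; this sketch simply lists the S3 buckets in the account:

import boto3

# Credentials and Region are picked up from your environment/AWS configuration.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])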
----------------------------------------------------------------------------------
AWS Elastic Beanstalk:
-----------------------------
With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys the resources necessary to perform the following tasks:
Adjust capacity
Load balancing
Automatic scaling
Application health monitoring
---------------------------------------------------------------------
AWS CloudFormation:
------------------------
With AWS CloudFormation, you can treat your infrastructure as code. This means that you can build an environment by writing lines of code instead of using the AWS Management Console to individually provision resources.
AWS Outposts:
---------------
Extend AWS infrastructure and services to your on-premises data center.
--------------------------------------------------------------------------
Amazon Virtual Private Cloud (Amazon VPC):
----------------------------------------------
A networking service that you can use to establish boundaries around your AWS resources is Amazon Virtual Private Cloud (Amazon VPC).
Amazon VPC enables you to provision an isolated section of the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you define. Within a virtual private cloud (VPC), you can organize your resources into subnets. A subnet is a section of a VPC that can contain resources such as Amazon EC2 instances.
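A minimal boto3 sketch of that layering: create a VPC, then carve out two subnets inside it. The CIDR blocks and Availability Zone are placeholder choices:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# The VPC is the isolated section of the AWS Cloud.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Subnets are sections of the VPC that will hold resources such as EC2 instances.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-west-1a"
)["Subnet"]["SubnetId"]

private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-west-1a"
)["Subnet"]["SubnetId"]

print(vpc_id, public_subnet, private_subnet)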
Internet gateway:
-----------------
To allow public traffic from the internet to access your VPC, you attach an internet gateway to the VPC.
With an internet gateway attached, anyone on the internet can access the public resources within your VPC.
Without an internet gateway, no one can access the resources within your VPC.
What if you have a VPC that includes only private resources?
Virtual private gateway:
------------------------
A virtual private gateway enables you to establish a virtual private network (VPN) connection between your VPC and a private network, such as an on-premises data center or internal corporate network. A virtual private gateway allows traffic into the VPC only if it is coming from an approved network.
AWS Direct Connect:
------------------------
AWS Direct Connect is a service that enables you to establish a dedicated private connection between your data center and a VPC.
Suppose that there is an apartment building with a hallway directly linking the building to the coffee shop. Only the residents of the apartment building can travel through this hallway.
This private hallway provides the same type of dedicated connection as AWS Direct Connect. Residents are able to get into the coffee shop without needing to use the public road shared with other customers.
---------------------------------------------------------------------------------------------------------------------------
Subnets and Network Access Control Lists :
----------------------------------------------
Subnets
------------
A subnet is a section of a VPC in which you can group resources based on security or operational needs. Subnets can be public or private.
First, customers give their orders to the cashier. The cashier then passes the orders to the barista. This process allows the line to keep running smoothly as more customers come in.
Suppose that some customers try to skip the cashier line and give their orders directly to the barista. This disrupts the flow of traffic and results in customers accessing a part of the coffee shop that is restricted to them.
To fix this, the owners of the coffee shop divide the counter area by placing the cashier and the barista in separate workstations. The cashier’s workstation is public facing and designed to receive customers. The barista’s area is private. The barista can still receive orders from the cashier but not directly from customers.
Public subnets contain resources that need to be accessible by the public, such as an online store’s website.
Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information and order histories.
In a VPC, subnets can communicate with each other. For example, you might have an application that involves Amazon EC2 instances in a public subnet communicating with databases that are located in a private subnet.
Network Access Control Lists (ACLs):
------------------------------------
A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.
Each AWS account includes a default network ACL. When configuring your VPC, you can use your account’s default network ACL or create custom network ACLs.
By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is denied until you add rules to specify which traffic to allow. Additionally, all network ACLs have an explicit deny rule. This rule ensures that if a packet doesn’t match any of the other rules on the list, the packet is denied.
Stateless Packet Filtering
Network ACLs perform stateless packet filtering. They remember nothing and check every packet that crosses the subnet border in each direction: inbound and outbound.
(Recall that your account's default network ACL allows all inbound and outbound traffic by default.)
Stateful Packet Filtering
Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets, so responses to allowed requests are automatically allowed to flow back out.
Security groups deny all inbound traffic by default; you add rules to allow the traffic you need.
Network access control lists (ACLs) --- stateless
Security groups --- stateful
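To make the security-group behavior concrete, here is a boto3 sketch that creates a security group (which starts with no inbound rules, so all inbound traffic is denied) and then allows HTTPS in; the VPC ID is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# A new security group denies all inbound traffic until you add rules.
sg_id = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS from anywhere",
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
)["GroupId"]

# Add an inbound rule. Because security groups are stateful, responses to
# this allowed traffic are automatically allowed back out.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)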
==================================================================================================================
Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications hosted in AWS.
Suppose that AnyCompany’s application is running on several Amazon EC2 instances.These instances are in an Auto Scaling group that attaches to an Application Load Balancer.
A customer requests data from the application by going to AnyCompany’s website.
Amazon Route 53 uses DNS resolution to identify AnyCompany.com’s corresponding IP address, 192.0.2.0. This information is sent back to the customer.
The customer’s request is sent to the nearest edge location through Amazon CloudFront.
Amazon CloudFront connects to the Application Load Balancer, which sends the incoming packet to an Amazon EC2 instance.
====================================================================================================================
STORAGE:
=======
Instance Stores:
----------------
Block-level storage volumes behave like physical hard drives.
An instance store provides temporary block-level storage for an Amazon EC2 instance.
An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.
Amazon Elastic Block Store (Amazon EBS):
------------------------------
Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available.
To create an EBS volume, you define the configuration (such as volume size and type) and provision it. After you create an EBS volume, it can attach to an Amazon EC2 instance.
Because EBS volumes are for data that needs to persist, it’s important to back up the data. You can take incremental backups of EBS volumes by creating Amazon EBS snapshots.
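A boto3 sketch of that lifecycle: provision a volume, attach it to an instance in the same Availability Zone, and back it up with a snapshot. The instance ID and device name are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Provision a 20 GiB gp3 volume in the same Availability Zone as the instance.
volume_id = ec2.create_volume(
    AvailabilityZone="us-west-1a", Size=20, VolumeType="gp3"
)["VolumeId"]

# Wait until the volume is ready, then attach it to an existing (placeholder) instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf"
)

# Back up the volume with an (incremental) EBS snapshot.
snapshot_id = ec2.create_snapshot(
    VolumeId=volume_id, Description="Nightly backup"
)["SnapshotId"]
print(snapshot_id)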
Amazon Simple Storage Service (Amazon S3):
-----------------------------------------
Object Storage
In object storage, each object consists of data, metadata, and a key.
The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object’s key is its unique identifier.
Recall that when you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.
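A small boto3 sketch of working with objects: upload a file (the data) with some metadata, then read it back by its key. The bucket name is a placeholder and is assumed to already exist:

import boto3

s3 = boto3.client("s3")
bucket = "example-company-media"          # placeholder bucket name

# Upload an object: data (Body), metadata, and a unique key.
s3.put_object(
    Bucket=bucket,
    Key="videos/intro.mp4",
    Body=open("intro.mp4", "rb"),
    Metadata={"uploaded-by": "marketing"},
)

# Retrieve the whole object by its key (objects are read and written as a unit).
obj = s3.get_object(Bucket=bucket, Key="videos/intro.mp4")
data = obj["Body"].read()
print(len(data), "bytes")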
Amazon S3 Storage Classes:
-------------------------------
-->How often you plan to retrieve your data
-->How available you need your data to be
Amazon S3 Standard:
-------------------------
Designed for frequently accessed data
Stores data in a minimum of three Availability Zones
Amazon S3 Standard-Infrequent Access (S3 Standard-IA):
-------------------------------------------------------
Ideal for infrequently accessed data
Similar to Amazon S3 Standard but has a lower storage price and higher retrieval price
Amazon S3 Standard-IA is ideal for data that is accessed infrequently but requires high availability when needed. Use it when you want to save costs on storage.
Amazon S3 Intelligent-Tiering:
----------------------------------
Ideal for data with unknown or changing access patterns
Requires a small monthly monitoring and automation fee per object
In the Amazon S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, Amazon S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, Amazon S3 Standard.
Amazon S3 Glacier Instant Retrieval:
-----------------------------------
Works well for archived data that requires immediate access
Can retrieve objects within a few milliseconds
When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the Amazon S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as Amazon S3 Standard.
Amazon S3 Glacier Flexible Retrieval:
-----------------------------------------
Low-cost storage designed for data archiving
Able to retrieve objects within a few minutes to hours
is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files.
Amazon S3 Glacier Deep Archive:
--------------------------------
Lowest-cost object storage class ideal for archiving
Able to retrieve objects within 12 hours
Amazon S3 Outposts:
--------------------
Creates S3 buckets on Amazon S3 Outposts
Makes it easier to retrieve, store, and access data on AWS Outposts
Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts. It works well for workloads with local data residency requirements that must satisfy demanding performance needs by keeping data close to on-premises applications.
--------------------------------------------------------------------
File Storage:
-------------------
In file storage, multiple clients (such as users, applications, servers, and so on) can access data that is stored in shared file folders. In this approach, a storage server uses block storage with a local file system to organize files. Clients access data through file paths.
Compared to block storage and object storage, file storage is ideal for use cases in which a large number of services and resources need to access the same data at the same time.
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can scale on demand to petabytes without disrupting applications.
Comparing Amazon EBS and Amazon EFS
Amazon EBS:
-----------
An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
Amazon EFS:
---------------
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.
The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
-----------------------------------------------------------------------------------------------------------------------------------
Amazon Relational Database Service (Amazon RDS):
---------------------------------------------
Relational databases:
-------------------
In a relational database, data is stored in a way that relates it to other pieces of data.
Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include:
Amazon Aurora
PostgreSQL
MySQL
MariaDB
Oracle Database
Microsoft SQL Server
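As a sketch of provisioning one of these engines with boto3 (a small MySQL instance here), where the identifier, instance class, and credentials are placeholder values:

import boto3

rds = boto3.client("rds", region_name="us-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",       # placeholder name
    Engine="mysql",                         # one of the supported engines
    DBInstanceClass="db.t3.micro",          # hardware class of the database instance
    AllocatedStorage=20,                    # storage in GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; store real secrets securely
)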
Amazon Aurora:
--------------
Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases.
----------------------------------------------------------------------------------------
Amazon DynamoDB
------------------
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale.
Serverless:
------------
DynamoDB is serverless, which means that you do not have to provision, patch, or manage servers.
You also do not have to install, maintain, or operate software.
Automatic Scaling:
------------------
As the size of your database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity while maintaining consistent performance.
This makes it a suitable choice for use cases that require high performance while scaling.
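A boto3 sketch of the key-value model: write and read an item by its key. The table name and attribute names are placeholders, and the table is assumed to already exist with "order_id" as its partition key:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-west-1")
table = dynamodb.Table("Orders")          # placeholder table name

# Write an item (a collection of attributes identified by its key).
table.put_item(Item={"order_id": "1001", "customer": "Ana", "total": 7})

# Read it back by key.
item = table.get_item(Key={"order_id": "1001"}).get("Item")
print(item)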
===================================================================================================================================
Amazon Redshift:
------------------
Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.
------------------------------------------------------------------------------------------------
AWS Database Migration Service (DMS)
-----------------------------------
AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, nonrelational databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target database.
The source and target databases can be of the same type or different types.
During the migration, your source database remains operational, reducing downtime for any applications that rely on the database.
=================================================================================================================================
Shared Responsibility Model
---------------------------------------------------
Under the AWS shared responsibility model, AWS is responsible for security OF the cloud (the physical infrastructure, hardware, and software that run AWS services), and the customer is responsible for security IN the cloud (their data, operating system and network configuration, and user permissions and access).
User Permission and Access
IAM users, groups, and roles
IAM policies
Multi-factor authentication
IAM users
An IAM user is an identity that you create in AWS. It represents the person or application that interacts with AWS services and resources. It consists of a name and credentials.
By default, when you create a new IAM user in AWS, it has no permissions associated with it. To allow the IAM user to perform specific actions in AWS, such as launching an Amazon EC2 instance or creating an Amazon S3 bucket, you must grant the IAM user the necessary permissions.
Best practice:
We recommend that you create individual IAM users for each person who needs to access AWS.
Even if you have multiple employees who require the same level of access, you should create individual IAM users for each of them. This provides additional security by allowing each IAM user to have a unique set of security credentials.
IAM policies
An IAM policy is a document that allows or denies permissions to AWS services and resources.
IAM policies enable you to customize users’ levels of access to resources. For example, you can allow users to access all of the Amazon S3 buckets within your AWS account, or only a specific bucket.
Best practice:
Follow the security principle of least privilege when granting permissions.
By following this principle, you help to prevent users or roles from having more permissions than needed to perform their tasks.
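As a sketch of least privilege with boto3, the inline policy below grants one user read/write access to a single (placeholder) S3 bucket and nothing else; the user and policy names are also placeholders:

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-reports-bucket/*",   # placeholder bucket
    }],
}

# Attach the policy inline to a single IAM user.
iam.put_user_policy(
    UserName="analyst-1",
    PolicyName="reports-bucket-access",
    PolicyDocument=json.dumps(policy),
)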
--------------------------------------------------------------------------------------------------------
When you create an organization, AWS Organizations automatically creates a root, which is the parent container for all the accounts in your organization.
Organizational Units
In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to manage accounts with similar business or security requirements. When you apply a policy to an OU, all the accounts in the OU automatically inherit the permissions specified in the policy.
Hierarchy:
organization root
    |
organizational units (OUs)
    |
accounts
----------------------------------------------------------
Denial-of-Service Attacks
A denial-of-service (DoS) attack is a deliberate attempt to make a website or application unavailable to users.
Distributed Denial-of-Service Attacks
Now, suppose that a prankster who has been calling a coffee shop with fake orders (on their own, a denial-of-service attack) has enlisted the help of friends.
The prankster and their friends repeatedly call the coffee shop with requests to place orders, even though they do not intend to pick them up. These requests are coming in from different phone numbers, and it’s impossible for the coffee shop to block them all. Additionally, the influx of calls has made it increasingly difficult for customers to be able to get their calls through. This is similar to a distributed denial-of-service attack.
AWS Shield Standard:
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attack.
As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques to detect malicious traffic in real time and automatically mitigates it.
AWS Shield Advanced:
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks.
It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to mitigate complex DDoS attacks.
--------------------------------------------------------------------------------------
Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics.
CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time.
CloudWatch Alarms
With CloudWatch, you can create alarms that automatically perform actions if the value of your metric has gone above or below a predefined threshold.
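A boto3 sketch of such an alarm: it watches an EC2 instance's average CPU and notifies an SNS topic when the metric stays above the threshold. The instance ID and topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-on-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                      # evaluate 5-minute data points
    EvaluationPeriods=2,             # ... over two consecutive periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-1:123456789012:ops-alerts"],       # placeholder SNS topic
)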
AWS CloudTrail
AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind them.
---------------------------------------------------------------------------
AWS Free Tier
Always Free
These offers do not expire and are available to all AWS customers.
For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
12 Months Free
These offers are free for 12 months following your initial sign-up date to AWS.
Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.
Trials
Short-term free trial offers start from the date you activate a particular service. The length of each trial might vary by number of days or the amount of usage in the service.
For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables you to run virtual private servers) offers 750 free hours of usage over a 30-day period.
----------------------------------------------------------------------------------------------
AWS Pricing Calculator
The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can organize your AWS estimates by groups that you define. A group can reflect how your company is organized, such as providing estimates by cost center.
AWS Budgets
In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance reservations.
The information in AWS Budgets updates three times a day. This helps you to accurately determine how close your usage is to your budgeted amounts or to the AWS Free Tier limits.
In AWS Budgets, you can also set custom alerts when your usage exceeds (or is forecasted to exceed) the budgeted amount.
AWS Cost Explorer
AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs and usage over time.
AWS Cost Explorer includes a default report of the costs and usage for your top five cost-accruing AWS services. You can apply custom filters and groups to analyze your data. For example, you can view resource usage at the hourly level.
----------------------------------------------------------------------------------------------------------------
AWS Support Plans
Basic Support is free for all AWS customers. It includes access to whitepapers, documentation, and support communities. With Basic Support, you can also contact AWS for billing questions and service limit increases.
Developer, Business, Enterprise On-Ramp, and Enterprise Support
The Developer, Business, Enterprise On-Ramp, and Enterprise Support plans include all the benefits of Basic Support, in addition to the ability to open an unrestricted number of technical support cases. These four Support plans have pay-by-the-month pricing and require no long-term contracts.
Developer Support
Customers in the Developer Support plan have access to features such as:
Best practice guidance
Client-side diagnostic tools
Building-block architecture support, which consists of guidance for how to use AWS offerings, features, and services together
Business Support
Customers with a Business Support plan have access to additional features, including:
Use-case guidance to identify AWS offerings, features, and services that can best support your specific needs
All AWS Trusted Advisor checks
Limited support for third-party software, such as common operating systems and application stack components
Enterprise On-Ramp Support
In November 2021, AWS opened enrollment into AWS Enterprise On-Ramp Support plan. In addition to all the features included in the Basic, Developer, and Business Support plans, customers with an Enterprise On-Ramp Support plan have access to:
A pool of Technical Account Managers to provide proactive guidance and coordinate access to programs and AWS experts
A Cost Optimization workshop (one per year)
A Concierge support team for billing and account assistance
Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard
Enterprise Support
In addition to all features included in the Basic, Developer, Business, and Enterprise On-Ramp support plans, customers with Enterprise Support have access to:
A designated Technical Account Manager to provide proactive guidance and coordinate access to programs and AWS experts
A Concierge support team for billing and account assistance
Operations Reviews and tools to monitor health
Training and Game Days to drive innovation
Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard
-------------------------------------------------------------------------------------------------
AWS Marketplace
---------------
AWS Marketplace is a digital catalog that includes thousands of software listings from independent software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.
For each listing in AWS Marketplace, you can access detailed information on pricing options, available support, and reviews from other AWS customers.
-------------------------------------------------------------------------
AWS Cloud Adoption Framework (AWS CAF)
Six core perspectives of the Cloud Adoption Framework:
--------------------------------------------------
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The planning process helps the right people across the organization prepare for the changes ahead.
In general, the Business, People, and Governance Perspectives focus on business capabilities, whereas the Platform, Security, and Operations Perspectives focus on technical capabilities.
=============================================================================================================================================
Migration Strategies:
----------------------------
6 Strategies for Migration
Rehosting
----------
Rehosting also known as “lift-and-shift” involves moving applications without changes.
In the scenario of a large legacy migration, in which the company is looking to implement its migration and scale quickly to meet a business case, the majority of applications are rehosted.
Replatforming
--------------
Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud optimizations to realize a tangible benefit. Optimization is achieved without changing the core architecture of the application.
Refactoring/re-architecting
----------------------------
Refactoring (also known as re-architecting) involves reimagining how an application is architected and developed by using cloud-native features. Refactoring is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.
Repurchasing:
-------------
Repurchasing involves moving from a traditional license to a software-as-a-service model.
For example, a business might choose to implement the repurchasing strategy by migrating from a customer relationship management (CRM) system to Salesforce.com.
Retaining:
----------
Retaining consists of keeping applications that are critical for the business in the source environment. This might include applications that require major refactoring before they can be migrated, or work that can be postponed until a later time.
Retiring:
-----------
Retiring is the process of removing applications that are no longer needed.
===========================================================================================================
AWS Snow Family
The AWS Snow Family is a collection of physical devices that help to physically transport up to exabytes of data into and out of AWS.
AWS Snow Family is composed of AWS Snowcone, AWS Snowball, and AWS Snowmobile.
---------------------------------------------
The AWS Well-Architected Framework:
-------------------------------------
The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement.
The Well-Architected Framework is based on six pillars:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
Sustainability
==================================================================================================================================================================================================================================================================================================================================