A serverless application is one that is built and run without the developer managing the underlying server infrastructure. Instead, the cloud service provider automatically handles server management tasks such as provisioning, scaling, and maintenance.
Using AWS Lambda for deployment of serverless applications has several advantages, some of which are as follows:
Reduced cost: Lambda is a serverless service, so there are no costs for idle time and we are only charged for the actual compute time consumed by our function.
Scalability: Lambda scales automatically based on the number of requests made to it. This removes the need for manual intervention to handle spikes in traffic.
Fully managed: Lambda is a fully managed service, so we don’t have to worry about infrastructure management-related tasks.
Now that we know the advantages of using Lambda to host serverless applications, let’s deploy one. We’ll deploy a simple storage application that lets us perform CRUD operations on a DynamoDB table. Here’s the infrastructure that we’ll provision:
We’ll start by creating the resources that the application and Lambda will use.
We’ll require an IAM role because our Lambda function will call other AWS services, and an AWS service must assume an IAM role before it can act on other services. The command below creates an IAM role for our Lambda function.
Enter your AWS Access_Key_ID and Secret_Access_Key in the widget below before running any commands. If you don’t have these keys, follow the Answer “How to generate AWS access keys” to generate them.
Note: The IAM user whose credentials are being used must have the permission to perform all the required actions.
aws iam create-role \
  --role-name LambdaExecutionRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": ["lambda.amazonaws.com", "apigateway.amazonaws.com"]
        },
        "Action": "sts:AssumeRole"
      }
    ]
  }' \
  --region us-east-1
In this command, the assume-role-policy-document argument specifies the services that will be allowed to assume this role. You’ll get detailed information about the newly created role. Copy the value of the ARN from the output and store it in the iam_role_arn field using the playground below, similar to how you entered your AWS access keys.
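If you lose track of the role ARN, an optional query like the one below prints it again (it assumes the role name used above):
aws iam get-role \
  --role-name LambdaExecutionRole \
  --query 'Role.Arn' \
  --output text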
Now that the role has been created, let’s attach the required permissions to it using the put-role-policy command.
aws iam put-role-policy \
  --role-name LambdaExecutionRole \
  --policy-name LambdaExecutionPolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "dynamodb:PutItem",
          "dynamodb:GetItem",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem",
          "dynamodb:Scan"
        ],
        "Resource": "arn:aws:dynamodb:us-east-1:*:table/Storage"
      },
      {
        "Effect": "Allow",
        "Action": ["lambda:InvokeFunction"],
        "Resource": [
          "arn:aws:lambda:us-east-1:*:function:ApplicationServer",
          "arn:aws:lambda:us-east-1:*:function:ApplicationServer:$LATEST"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource": "*"
      }
    ]
  }' \
  --region us-east-1
The policy-document argument specifies the permissions that this role will have. We will use the Lambda function to perform CRUD operations on a DynamoDB table, so this IAM role, which will be assumed by the Lambda function, has been granted the necessary DynamoDB permissions in the first statement of the policy document.
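To confirm the policy is attached as expected, we can optionally read it back with the get-role-policy command:
aws iam get-role-policy \
  --role-name LambdaExecutionRole \
  --policy-name LambdaExecutionPolicy \
  --region us-east-1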
Next, we’ll create a DynamoDB table, which our application will use as its storage. All our CRUD requests will be directed to this table. The command below creates a DynamoDB table:
aws dynamodb create-table \
  --table-name Storage \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PROVISIONED \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
This table will have only one attribute, id, of type string.
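Table creation takes a few moments. Before moving on, we can optionally wait for the table to become active and check its status:
aws dynamodb wait table-exists --table-name Storage
aws dynamodb describe-table --table-name Storage --query 'Table.TableStatus' --output text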
Now, we’ll create a Lambda function that will host our application. We’ll also need to upload the application code to the Lambda function during its creation. The widget below contains the code of the application, which performs basic CRUD operations on a DynamoDB table. We’ll use it to operate on the table we created earlier.
This code will be uploaded to the Lambda function as a zipped file. Click the “Run” button, and this code will be zipped into a file titled deployment_package.zip. After that, the script in the main.sh file will execute. This script contains the command to create a Lambda function with deployment_package.zip as its deployment package.
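For reference, the packaging step performed by the widget is roughly equivalent to zipping the handler file yourself. This is only a sketch; it assumes the application code lives in a file named lambda_function.js, matching the lambda_function.handler setting used below:
# Assumes the application code is in lambda_function.js
zip deployment_package.zip lambda_function.js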
aws lambda create-function \
  --function-name ApplicationServer \
  --runtime nodejs20.x \
  --role $iam_role_arn \
  --handler lambda_function.handler \
  --zip-file fileb://deployment_package.zip \
  --region us-east-1
Our application checks the incoming HTTP method and delegates the operation to the corresponding asynchronous function (getItems, createItem, or deleteItem). These functions construct and execute DynamoDB queries based on the provided parameters, responding with appropriate status codes and messages.
Copy the ARN of the Lambda function from the response and store it in the lambda_function_arn field in the playground below.
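If you need to look the ARN up again later, the following optional query prints it directly (assuming the function name used above):
aws lambda get-function-configuration \
  --function-name ApplicationServer \
  --region us-east-1 \
  --query 'FunctionArn' \
  --output text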
Now, the only thing remaining is to create and configure an API Gateway, which we’ll use to interact with the Lambda function and, effectively, with our application. Execute the command given below to create an API Gateway titled ServerlessAppAPI.
aws apigateway create-rest-api --name ServerlessAppAPI
From the output, save the id in the api_id field and the rootResourceId in the parent_id field, using the widget below.
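If the output has scrolled away, both values can also be fetched again; the second query assumes the api_id field has already been populated:
aws apigateway get-rest-apis --query "items[?name=='ServerlessAppAPI'].id" --output text
aws apigateway get-resources --rest-api-id $api_id --query "items[?path=='/'].id" --output text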
The first part of configuring the created API Gateway is to create a resource in it. Resources shape the URL of your API; we can use them to organize and structure the API into multiple URLs based on different functionalities. The command below creates a resource named myapp in the API Gateway we created; execute it now.
aws apigateway create-resource --rest-api-id $api_id --parent-id $parent_id --path-part 'myapp'
Note down the id from the response and save it in the resource_id field in the widget below; it will be used in the next steps. We can have multiple resources based on our requirements. These resources become part of the URI that we can use to call the API.
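As with the IDs above, the resource ID can optionally be retrieved again with a query on the API’s resources:
aws apigateway get-resources --rest-api-id $api_id --query "items[?pathPart=='myapp'].id" --output text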
Now, we’ll create a method that will be used to call the API. We’ll create the ANY method, which accommodates all types of HTTP requests. Execute the command given below to create the ANY method within the resource we created earlier.
aws apigateway put-method \
  --rest-api-id $api_id \
  --region us-east-1 \
  --resource-id $resource_id \
  --http-method ANY \
  --authorization-type "NONE"
We have configured our API as per our requirements: it has the path structure we want and accepts all HTTP requests. Our next step is to integrate our API with our Lambda function. To do that, we’ll use the put-integration command. Click the “Run” button to integrate our Lambda function with our API.
aws apigateway put-integration \
  --region us-east-1 \
  --rest-api-id $api_id \
  --resource-id $resource_id \
  --http-method ANY \
  --type AWS_PROXY \
  --integration-http-method POST \
  --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$lambda_function_arn/invocations \
  --credentials $iam_role_arn \
  --request-templates '{"application/json": "{\n \"httpMethod\": \"$context.httpMethod\",\n \"body\": $input.body\n}"}'
The request-templates argument contains the event data that will be sent to the Lambda function by the API when it’s invoked.
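To confirm the integration was stored the way we expect, we can optionally read it back:
aws apigateway get-integration \
  --rest-api-id $api_id \
  --resource-id $resource_id \
  --http-method ANY \
  --region us-east-1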
Now, finally, we’ll deploy the API to complete our configurations. Execute the command below to deploy the API.
aws apigateway create-deployment --rest-api-id $api_id --stage-name v1
Our infrastructure is now completely set up. All we need is the invoke URL of the API we created to test our infrastructure. Click the “Run” button below to get the invoke URL of the v1 deployment stage of our API.
echo "https://$api_id.execute-api.us-east-1.amazonaws.com/v1"
Note down the URL we get in response and store it somewhere safe. This is the invoke URL of our API.
Now, let’s test our infrastructure using the playground given below. Follow these steps to do so:
We’ll start with a POST call to create an item in our DynamoDB table using our serverless application. Replace <invoke-URL> with the API URL we saved earlier. The details of the item we are creating will be sent as the body of the API call. After taking a look at the “Body” for this call, click the “Send” button to make the API call. We’ll get an “Item created successfully” message in response.
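If you’d rather test from the command line instead of the widget, a roughly equivalent POST call with curl would look like the sketch below. The exact body schema depends on the application code in the widget, so treat id and name as placeholder attributes:
# Placeholder body; adjust the attributes to match the application code
curl -X POST "<invoke-URL>/myapp" \
  -H "Content-Type: application/json" \
  -d '{"id": "1", "name": "sample item"}'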
Now let’s check whether the item we created earlier actually exists in our DynamoDB table by making a GET call. Select “View Items” under “Collection,” replace <invoke-URL> with the invoke URL of our API, and click the “Send” button to make a GET call. We’ll get the item we saved earlier as the response of this call, indicating that our POST and GET calls are working.
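The same check can be done from the command line; a GET request against the myapp resource should return the stored items:
curl "<invoke-URL>/myapp"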
The last thing to check for our application is whether the DELETE call is working. Select “Delete Item” under “Collection” and replace <invoke-URL> with the invoke URL of our API. The item to be deleted will be sent as the body of this call, so after taking a look at the “Body” for this call, click the “Send” button to make the DELETE call. We’ll get an “Item deleted successfully” message in response. Now make another GET call, and you’ll get an empty response, indicating that the DELETE call actually deleted the specified item from the table.
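From the command line, the equivalent DELETE call would look roughly like this, again with a placeholder body that should match the item created earlier:
# Placeholder body; the id should match the item you created
curl -X DELETE "<invoke-URL>/myapp" \
  -H "Content-Type: application/json" \
  -d '{"id": "1"}'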
Our serverless application is now fully functional.
So, we’ve deployed a completely functional application with a serverless architecture. Deploying the application this way saves us from the overhead of managing servers, allowing our team to focus on writing code and developing features rather than dealing with server provisioning, scaling, and maintenance. Additionally, serverless services scale automatically to handle varying workloads and provide cost efficiency by charging only for actual usage.