Amazonian puppeteer

Adventures of getting Puppeteer working in AWS Lambda

Warrenn Enslin · Jun 17

Introduction

I am the most forgetful person I know.

It comes as no surprise, then, that when it comes time to submit my timesheets at the end of the week I perpetually forget. So I decided to take matters into my own hands and automate the whole darn thing.

The idea was to screen scrape my time recording application, as they don't provide an API, and then take that information and submit it to another timesheet application (also no API). I mention these details only as a preface to justify the decisions around the technology.

What it basically boils down to is this: I need to automate a weekly submission, so I need some sort of cron job. It is going to screen scrape data that is dynamically generated from JavaScript, so I need a headless browser. I am using my username and password, so I need a secrets store of some sort. The tech stack of choice: AWS Lambda, a CloudWatch cron event, AWS Secrets Manager, and Puppeteer.

I'm not going to go into detail on how I got my final solution working; rather, I will show you my working implementation of the tech stack.

Requirements

Before we can really get started we need a couple of things beforehand.

AWS account: It goes without saying that if you are going to use AWS you need to have an account. As far as we can, we will make use of the free-tier services. https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/

Install Docker: It is quite interesting, the development approach that AWS advocates.

In order to simulate the Lambda runtime environment there are a number of Docker images made available by AWS for developers to use, which means we need Docker to use them. For Mac https://docs.docker.com/docker-for-mac/install/ and for Windows https://docs.docker.com/docker-for-windows/install/. The version I used was Docker Desktop for Mac 2.0.0.3.

Install Node JS: Since the application is going to use Node JS, it makes a world of sense to make sure you have installed the latest version of Node JS and NPM. The versions I used were Node v11.9.0 and NPM v6.9.0. https://nodejs.org/en/download/

Install AWS Command Line Interface: We are going to use the AWS Serverless Application Model to create and submit the Lambda functions, which in turn relies on the AWS CLI.

What this means is that Python and PIP are also going to need to be installed. https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html. To get the version you are using you can type "aws --version", which for me was: aws-cli/1.16.99 Python/2.7.15 Darwin/18.6.0 botocore/1.12.89.

Install the SAM Command Line Interface: I decided to make use of the Serverless Application Model to submit and manage the Lambda functions, as it keeps everything in the AWS stack and I can leverage my familiarity with CloudFormation templates.

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html. For me this was SAM CLI, version 0.16.1.

Lay of the land ** TL;DR warning **

Before I get stuck into the process I want to give a brief introduction to the tech and concepts I used. I feel it's fair for those who don't know, or have forgotten and need reminding. Fair warning though: this is the TL;DR section, so you can skip it if you are already familiar with this stuff.

Lambda: Lambda is an AWS serverless product offering. It works on a pay-for-what-you-use basis. You basically create and upload the function you expect to be executed and specify how it should be dispatched, but there is no certainty about the actual hardware it will be running on. Lambda is priced on a GB-seconds basis, meaning that you pay for the number of gigabytes that were provisioned for the duration of each Lambda invocation, whether those gigabytes were used or not. Lambda can run on a number of runtimes: NodeJS, Python, .NET etc. Like most AWS services it's connected to the AWS infrastructure.
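To make the GB-seconds pricing concrete, here is a small worked example (the memory size and duration are made-up illustration values, not numbers from this article):

```javascript
// Lambda bills on GB-seconds: memory provisioned (in GB) multiplied by
// execution time (in seconds), regardless of how much memory was actually used.
function gbSeconds(memoryMb, durationSeconds) {
  return (memoryMb / 1024) * durationSeconds
}

// A hypothetical invocation with 512 MB provisioned that runs for 30 seconds:
const billed = gbSeconds(512, 30)
console.log(billed) // 0.5 GB * 30 s = 15 GB-seconds
```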

https://aws.amazon.com/lambda/

CloudFormation: CloudFormation is how AWS specifies its infrastructure as code.

It uses a YAML or JSON format to declaratively define which resources and infrastructure to provision. With CloudFormation you can define the complete stack of resources that you intend to use, and it allows resources to be provisioned in an idempotent way. It makes use of template references, resource references and parameters. https://aws.amazon.com/cloudformation/

SAM template: SAM templates are an extension of CloudFormation, which means you get to use the full suite of resources.

It comes with some simplified constructs specifically for defining serverless applications. https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html

Cron expression: The cron expression owes its origin to a utility on Unix systems that enables users to schedule tasks to run periodically at a specified date/time.

The cron syntax represents an expression that describes when an event will execute. It typically consists of six required fields, with <year> being optional (https://www.baeldung.com/cron-expressions):

<second> <minute> <hour> <day-of-month> <month> <day-of-week> <year>

Chrome DevTools Protocol: This is the protocol that allows tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers.

The protocol gives a standardised approach for integrating with and debugging browsers. https://chromedevtools.github.io/devtools-protocol/

Chromium: Chromium is an open-source browser project that forms the basis for the Chrome web browser, without the extra stuff like automatic updates and support for additional video formats.

Given that Chromium supports the Chrome DevTools Protocol, it makes a perfect candidate for a headless browser. A headless browser is basically a web browser without a graphical interface. https://www.howtogeek.com/202825/what%E2%80%99s-the-difference-between-chromium-and-chrome/

Puppeteer: Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol.

Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium. In this article I make use of puppeteer-core, which is the same API but doesn't download the Chromium browser by default. https://github.com/GoogleChrome/puppeteer

AWS Secrets Manager: This is a service provided by AWS to help manage protected secrets.

It allows us to store sensitive information in AWS using certificates. With Secrets Manager we can leave management details like encryption and certificate rotation up to AWS. https://aws.amazon.com/secrets-manager/

The adventure

Once I got everything installed and configured I took a read over the SAM documentation, and it looks like getting everything started is a simple sam init.

A couple of caveats here though: first, you need to target the nodejs8.10 runtime, as that is the required runtime for the chrome-aws-lambda package. Second, you probably don't want your working folder to be named hello-world, so it's a good idea to give it a name.

```shell
sam init -r nodejs8.10 -n autoweb
```

-r is short for the runtime, -n is short for the name; you can get help by using "sam init --help".

The generated readme does a pretty good job of explaining everything that is going on, so I am not going to expand on that much further.

The links in the default template.yaml file are also very good, so it's a good idea to make note of them and save them to your favourites for later. I will list them below just in case.

More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst

More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction

More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api

ServerlessRestApi is an implicit API created out of the Events key under Serverless::Function. Find out more about other implicit resources you can reference within SAM: https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api

It's great having a starter template with defaults and all, but we have a folder that says hello-world. We are also not using an API endpoint, at least not for this example, so let's fix that.

In the template.yaml, make a change to the Resources section just below the Globals.

```yaml
Resources:
  AutoWebFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: auto-web/
      Handler: app.lambdaHandler
      Runtime: nodejs8.10

Outputs:
  AutoWebFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt AutoWebFunction.Arn
  AutoWebFunctionIamRole:
    Description: "Implicit IAM Role created"
    Value: !GetAtt AutoWebFunctionRole.Arn
```

In CloudFormation templates you can specify the name of the resources you are using directly underneath the Resources section.

In our case we no longer want to call it HelloWorldFunction, so instead let's call it AutoWebFunction. The CodeUri property specifies the folder location of the Lambda function; in this case the folder location is relative to the template.yaml file, and we want it to be auto-web/. The Events section we can remove for now, as we are not using an API endpoint. Since the Resources section has changed, we need to reflect the new reference values in the Outputs section. Lastly, the Api output can also be removed, as we are not using an API for this example.

To rename the folder we can use the mv bash command.

```shell
mv hello-world auto-web
```

Looking good, but let's get something going.

A quick look back at the documents again, and it looks like the command to run is sam local. Since we are not using an API, we need to specify the invoke parameter with the name of the function we want to invoke, in this case AutoWebFunction.

```shell
sam local invoke AutoWebFunction
```

Darn, it's prompting me. If I look closely at the prompt, it says there's another option to use an input file instead. If I look at the files and folders, I notice an event.json file in the same folder. Okay, now we've got something to use.

```shell
sam local invoke AutoWebFunction --event event.json
```

Well, I can run it, but how do I debug it?

Taking a look at the documentation again, we have a rundown of how to enable debugging. I am using VSCode, so I need to create a custom launch configuration and choose a port number. To get there I select the Debug Menu > Add Configuration…, and VSCode will by default create a ./.vscode/launch.json file in the working folder. Here is what my launch.json looks like:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to AutoWeb",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}/auto-web",
      "remoteRoot": "/var/task",
      "protocol": "inspector",
      "stopOnEntry": false
    }
  ]
}
```

To debug, you first need to start the function with the debug port set to the one you specified in the launch.json file, in this case 9229.

```shell
sam local invoke AutoWebFunction --event event.json -d 9229
```

When that is running and listening, you can launch your debugger from VSCode using the launch configuration you just created. Don't forget to put a breakpoint in the ./auto-web/app.js file.

We've got the function and we can debug, so I think now is a good time to get into the details of the function. Before we go on, I feel it's important to point out a very important detail about Lambda, and that's the restriction on the deployment package size: basically your package needs to be a maximum of 50 MB zipped and 250 MB unzipped. Why I mention this now is that I made the mistake of just installing Puppeteer. When I deployed or tested, I couldn't figure out what was wrong; it was breaking in unexpected ways, freezing on the first await and so on. When I investigated further, I found that by default Puppeteer installs a version of Chromium with a size of ~170 MB. As you can see, this wasn't going to be a viable option for the Lambda function. Fortunately I discovered chrome-aws-lambda, which still comes in at a hefty 33.21 MB, but we can still make this work.
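As a sanity check before deploying, you can compare your package against those limits. A hypothetical helper (the constants mirror the 50 MB zipped / 250 MB unzipped limits mentioned above):

```javascript
// Lambda deployment package limits: 50 MB zipped upload, 250 MB unzipped.
const MAX_ZIPPED_BYTES = 50 * 1024 * 1024
const MAX_UNZIPPED_BYTES = 250 * 1024 * 1024

function withinLambdaLimits(zippedBytes, unzippedBytes) {
  return zippedBytes <= MAX_ZIPPED_BYTES && unzippedBytes <= MAX_UNZIPPED_BYTES
}

// A full Puppeteer install (~170 MB of Chromium before the rest of the
// package) blows the zipped budget, while chrome-aws-lambda (~33 MB zipped)
// stays inside it. Example with made-up package sizes:
console.log(withinLambdaLimits(33 * 1024 * 1024, 200 * 1024 * 1024)) // true
```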

So instead of Puppeteer we can install chrome-aws-lambda and puppeteer-core, which exposes the same API as Puppeteer, only without the default download of Chromium. Change directory from the working folder to the sub folder auto-web, then npm install chrome-aws-lambda and puppeteer-core.

```shell
cd auto-web
npm install chrome-aws-lambda puppeteer-core
```

Now armed with a debuggable and deployable solution, it's time to get into some code. The documentation for Puppeteer is pretty good, so if you need any help you will probably find what you need at https://devdocs.io/puppeteer/.

Since Puppeteer is not much more than a high-level API over the DevTools Protocol, we are going to need an instance of the Chromium browser running. Once the browser is up and running, Puppeteer can then establish a WebSocket connection to its host and start exchanging messages. In the app.js file:

```javascript
const chromium = require('chrome-aws-lambda')

let browser = null
let page = null

// ...

exports.lambdaHandler = async (event, context) => {
  try {
    browser = await chromium.puppeteer.launch({
      args: chromium.args,
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath,
      headless: chromium.headless
    })
    // ...
```

The chromium.puppeteer.launch function returns a Promise of the browser object. It basically starts up the browser and attaches Puppeteer to the running instance via WebSockets. With the browser object in hand, we can now create a page object from the browser.

```javascript
page = await browser.newPage()
```

The page object gives us access to the functions we need, like browsing to a URL or getting a selector from the generated HTML on the page.

We can now use the page object to interact with the browser.

```javascript
const navigationPromise = page.waitForNavigation()
await page.goto("https://www.example.com", { waitUntil: 'networkidle2' })
await page.setViewport({ width: 1920, height: 1001 })
await navigationPromise
```

On the first line here we are using the waitForNavigation function, which returns a Promise of the main resource response. This promise resolves when the page navigates to a new URL or reloads. It's useful when we want to block execution to ensure the page is loaded before we continue. The next line is the goto function; this will navigate to the specified URL, in this case "https://www.example.com". The object with the waitUntil property basically tells the page when to consider navigation to have succeeded. In this case we consider navigation to be finished when there are no more than 2 network connections for at least 500 ms. The result is a Promise, much the same as waitForNavigation, which allows us to block execution until the page has completed its navigation. The setViewport call just sets the expected width and height of the page's viewport. After that we await the navigationPromise, which doubly makes sure we have successfully navigated to the page we are after.

Now that we are on the page, we can do some screen scraping and get some values from the webpage.

```javascript
const footerleft = await page.waitForSelector('body > div > p:nth-child(2)')
const textContent = await (await footerleft.getProperty('textContent')).jsonValue()
```

The page object has an extremely useful function, waitForSelector. This function allows you to specify a selector string (think jQuery) and returns a Promise of an ElementHandle specified by the selector. It will, as the name implies, wait for that selector to appear on the page. You need to be careful with this though: if the element you are trying to get is not on the page, this will only time out after 30 seconds by default, which is 30 seconds of Lambda time, and that could get expensive. At this point what we have is an ElementHandle; to get any property values from it we need to use the getProperty function, which returns a Promise of a JSHandle. That's not quite the string value we are after, so we need to do one more trick, and that is jsonValue, which will give us a Promise of the string value we are after.

We've got code that can now browse and screen scrape. Other useful functions are page.type and page.click, which let you do things like type text into text boxes or click buttons specified by a selector.

It's always a good idea to make sure your function can exit gracefully, so it's good practice to make sure you close the page and disconnect the browser. We can do that in the finally block, to make sure we call those functions despite any exceptions we might encounter.

```javascript
// ...

exports.lambdaHandler = async (event, context) => {
  try {
    // ...
  } catch (err) {
    console.log(err)
    return err
  } finally {
    if (page) {
      await page.close()
    }
    if (browser) {
      await browser.disconnect()
    }
  }
}
```

The code should look something like this:

```javascript
const chromium = require('chrome-aws-lambda')

let browser = null
let page = null

exports.lambdaHandler = async (event, context) => {
  try {
    browser = await chromium.puppeteer.launch({
      args: chromium.args,
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath,
      headless: chromium.headless
    })
    page = await browser.newPage()
    const navigationPromise = page.waitForNavigation()
    await page.goto("https://www.example.com", { waitUntil: 'networkidle2' })
    await page.setViewport({ width: 1920, height: 1001 })
    await navigationPromise
    const footerleft = await page.waitForSelector('body > div > p:nth-child(2)')
    const textContent = await (await footerleft.getProperty('textContent')).jsonValue()
    return { textContent }
  } catch (err) {
    console.log(err)
    return err
  } finally {
    if (page) {
      await page.close()
    }
    if (browser) {
      await browser.disconnect()
    }
  }
}
```

At this point we can verify that everything is working by stepping through the code.

What we are still missing, however, is a way to automatically kick it off at a specified time. This is where cron expressions come into play, so we are going to revisit that SAM template and give it a cron event. Initially I had some difficulty here, as a regular cron expression includes 6 required fields, with the seventh, the year, being optional. AWS cron expressions, however, don't include the seconds field, so we need to keep that in mind.
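Side by side, the difference looks roughly like this (both expressions describe Friday at 17:00; the first uses the Quartz-style six-field layout described earlier):

```text
# Quartz-style cron: seconds first, year optional
#   <second> <minute> <hour> <day-of-month> <month> <day-of-week> <year>
0 0 17 ? * FRI

# AWS schedule expression: no seconds field, year as the sixth field
#   cron(<minute> <hour> <day-of-month> <month> <day-of-week> <year>)
cron(0 17 ? * FRI *)
```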

Let's add the Events section to the AutoWebFunction in our template.yaml.

```yaml
Resources:
  AutoWebFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: auto-web/
      Handler: app.lambdaHandler
      Runtime: nodejs8.10
      Events:
        CronEvent:
          Type: Schedule
          Properties:
            Schedule: cron(0 17 ? * FRI *)
```

The cron expression here will fire every Friday at 5 pm (UTC).

Fortunately SAM templates are an extension of CloudFormation templates, so we can easily include new resources as well as parameters if we need to, and I would prefer the cron schedule to be something that can be specified as a parameter. So let's add a new parameter that we can use to specify the cron schedule instead of hard coding it. Just above the Resources section we can add a Parameters section, which will allow us to specify values when CloudFormation kicks off the provisioning of the stack.

```yaml
Parameters:
  CronExpression:
    Type: String
    Default: cron(0 17 ? * FRI *)

Resources:
  AutoWebFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: auto-web
      Handler: app.lambdaHandler
      Runtime: nodejs8.10
      Events:
        CronEvent:
          Type: Schedule
          Properties:
            Schedule:
              Ref: CronExpression
```

The last part now is secrets, which means it's time to add the SecretsManager resource.

As I mentioned earlier, the SAM template is really an extension of the CloudFormation template, so adding a SecretsManager resource is done the same way as you would normally in CloudFormation. The SecretsManager has a convention of saving data as a JSON object, so if we are going to save the username and password we should use a JSON string to do so. So we probably want a string that looks something like:

```yaml
SecretString: '{"username":"user","password":"secret"}'
```

I don't think the username and password are something that I want fixed in the template either, so I would prefer to take them as parameters as well.

```yaml
Parameters:
  CronExpression:
    Type: String
    Default: cron(0 17 ? * FRI *)
  Password:
    NoEcho: true
    Type: String
    Default: password
  UserName:
    NoEcho: true
    Type: String
    Default: username
```

For the new parameters I use NoEcho: true; what this does is let AWS know that when prompting for values to supply to the template it should mask those values with asterisks (*).

It is useful for masking secrets like passwords.

To add the SecretsManager resource, we can now add another resource just below the AutoWebFunction. We are going to give it the name "autowebsecrets"; this will be important later.

```yaml
Resources:
  # the AutoWebFunction goes here
  AutoWebSecrets:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: autowebsecrets
      SecretString:
        Fn::Join:
          - ''
          - - '{"username":"'
            - Ref: UserName
            - '","password":"'
            - Ref: Password
            - '"}'
```

In the template we use the CloudFormation function Fn::Join. This function allows us to append a set of values into a single value, separated by the specified delimiter. If the delimiter is an empty string, the values are concatenated with no delimiter. In this case we are just using it to do simple string concatenation in order to form a valid JSON value for the SecretString.
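If Fn::Join is unfamiliar, its behaviour with an empty delimiter can be sketched in plain JavaScript (this is just an illustration of the concatenation, not AWS code; the 'user' and 'secret' values stand in for the resolved parameter references):

```javascript
// Fn::Join takes a delimiter and a list of values and joins them into one
// string. With an empty delimiter it is plain string concatenation.
function fnJoin(delimiter, values) {
  return values.join(delimiter)
}

// Mirrors the template above, with the Ref values already resolved:
const secretString = fnJoin('', [
  '{"username":"', 'user',     // Ref: UserName resolved to 'user'
  '","password":"', 'secret',  // Ref: Password resolved to 'secret'
  '"}'
])
console.log(secretString) // {"username":"user","password":"secret"}
```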

An important thing to note, however, is that the role the function runs under requires the rights to read the secrets. Fortunately AWS SAM has specially built-in policy templates for this kind of thing, and there is one for this very purpose: the AWSSecretsManagerGetSecretValuePolicy. We now need to change the resource declaration for the AutoWebFunction to include this policy declaration.

```yaml
Resources:
  AutoWebFunction:
    Type: AWS::Serverless::Function
    Properties:
      Policies:
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Ref AutoWebSecrets
      CodeUri: auto-web
      Handler: app.lambdaHandler
      Runtime: nodejs8.10
      Events:
        CronEvent:
          Type: Schedule
          Properties:
            Schedule:
              Ref: CronExpression
  AutoWebSecrets:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: autowebsecrets
      SecretString:
        Fn::Join:
          - ''
          - - '{"username":"'
            - Ref: UserName
            - '","password":"'
            - Ref: Password
            - '"}'
```

With the template provisioning the secrets and granting the role the function runs as enough rights to read them, we can now code the integration with AWS Secrets Manager.
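For reference, and as an assumption based on the SAM policy-template documentation rather than something verified here, AWSSecretsManagerGetSecretValuePolicy expands to an IAM statement along these lines, scoped to the SecretArn you pass in:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "<the SecretArn supplied to the policy template>"
    }
  ]
}
```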

To do so we need to add the aws-sdk library. So back to the auto-web subfolder in the working directory to install the aws-sdk npm library.

```shell
cd auto-web
npm install aws-sdk
```

The aws-sdk gives us access to most of the resources we might need in AWS, including the SecretsManager. To get access to the SecretsManager we must first create a client. Once we have the client, we can then use the name value we specified in the template earlier as the SecretId, in our case "autowebsecrets". It's important to note that AWS.SecretsManager requires a region; make sure this is the region you are deploying your SAM template to, as that's the region you will be using to provision your resources.

In the ./auto-web/app.js file:

```javascript
const AWS = require('aws-sdk')
const client = new AWS.SecretsManager({ region: 'eu-west-1' })

// ...
```

The client object has the getSecretValue function that we can use to get the secret from the Secrets Manager. This function doesn't return a Promise; rather, it takes as its second parameter a callback function that you can expect to be called when the call is complete. The callback function takes two parameters: the first is the error object, if the call failed, and the second is the data object, which contains an important property, SecretString. This is the decrypted JSON value as a string, just like the one we specified in our template. Given the awkwardness of using the callback function, I decided to create a helper function that lets me keep using async/await by wrapping the call in a Promise.

```javascript
function getSecret(secretId) {
  return new Promise((resolve, reject) => {
    client.getSecretValue({ SecretId: secretId }, (err, data) => {
      if (err) {
        reject(err)
        return
      }
      try {
        let result = JSON.parse(data.SecretString)
        resolve(result)
      } catch (e) {
        reject(e)
      }
    })
  })
}
```

Basically what is happening here is that this function returns a Promise object, which allows me to call await in an async function. We reject if there is an error from the callback or if an exception is thrown; otherwise we grab the SecretString from data and JSON.parse it (as the SecretString will be in JSON format). The parsed object is returned as the resolution of the Promise, which allows me to await the result in the async function.
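As an aside, the aws-sdk also lets you skip the hand-rolled Promise: calling .promise() on the request object gives you an awaitable directly. A sketch of that alternative (the client is injected as a parameter here purely so the function can be exercised without AWS credentials; the stub client is a made-up stand-in):

```javascript
// Hypothetical alternative to getSecret using aws-sdk v2's .promise() helper.
// `client` is any object exposing getSecretValue(params).promise().
async function getSecretViaPromise(client, secretId) {
  const data = await client.getSecretValue({ SecretId: secretId }).promise()
  return JSON.parse(data.SecretString)
}

// Example with a stub client standing in for AWS.SecretsManager:
const stubClient = {
  getSecretValue: ({ SecretId }) => ({
    promise: async () => ({ SecretString: '{"username":"user","password":"secret"}' })
  })
}

getSecretViaPromise(stubClient, 'autowebsecrets')
  .then(secret => console.log(secret.username)) // user
```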

```javascript
// ...

exports.lambdaHandler = async (event, context) => {
  try {
    const autowebsecrets = await getSecret('autowebsecrets')
    const username = autowebsecrets.username
    const password = autowebsecrets.password
    // ...
```

As I mentioned earlier, the secretId needs to be the name of the SecretsManager Secret that we provisioned in the template, in our case "autowebsecrets".

The final script should look like this:

```javascript
const chromium = require('chrome-aws-lambda')
const AWS = require('aws-sdk')

const client = new AWS.SecretsManager({ region: 'eu-west-1' })

let browser = null
let page = null

function getSecret(secretId) {
  return new Promise((resolve, reject) => {
    client.getSecretValue({ SecretId: secretId }, (err, data) => {
      if (err) {
        reject(err)
        return
      }
      try {
        let result = JSON.parse(data.SecretString)
        resolve(result)
      } catch (e) {
        reject(e)
      }
    })
  })
}

exports.lambdaHandler = async (event, context) => {
  try {
    const autowebsecrets = await getSecret('autowebsecrets')
    const username = autowebsecrets.username
    const password = autowebsecrets.password
    browser = await chromium.puppeteer.launch({
      args: chromium.args,
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath,
      headless: chromium.headless
    })
    page = await browser.newPage()
    const navigationPromise = page.waitForNavigation()
    await page.goto("https://www.example.com", { waitUntil: 'networkidle2' })
    await page.setViewport({ width: 1920, height: 1001 })
    await navigationPromise
    const footerleft = await page.waitForSelector('body > div > p:nth-child(2)')
    const textContent = await (await footerleft.getProperty('textContent')).jsonValue()
    return { username, password, textContent }
  } catch (err) {
    console.log(err)
    return err
  } finally {
    if (page) {
      await page.close()
    }
    if (browser) {
      await browser.disconnect()
    }
  }
}
```

With all this hard work we are finally ready to deploy.

Before we deploy, it's important to note that even though we are using the chrome-aws-lambda browser, running a simple example page can easily take up ~430 MB of memory. So if we expect our function to work, we are going to have to provision enough memory for the Lambda function to run, which is why we add a MemorySize to the Globals section in the template. While we are in the Globals section, I am also going to increase the Timeout to 900 seconds; this is the highest value that can be given, as the maximum timeout for a Lambda is 15 minutes. This should give us enough time to run the function to completion, just in case the web page is slow for whatever reason.

```yaml
Globals:
  Function:
    Timeout: 900
    MemorySize: 512
```

To deploy, we first need to create an S3 bucket where we can upload our Lambda functions packaged as a ZIP. Lambda will basically extract that package into a container, then execute the function specified in our template. So using the AWS CLI:

```shell
aws s3 mb s3://BUCKET_NAME
```

Where BUCKET_NAME is a unique bucket name for your account. To package the solution we use the sam package command.

```shell
sam package --template-file template.yaml --s3-bucket BUCKET_NAME --output-template-file template-out.yaml
```

Again, BUCKET_NAME is your unique bucket for your account.

The sam package command has basically zipped the contents of the folder you specified in the CodeUri property for the function in the template, given that zip a unique name, and uploaded it to S3. If you look carefully at the template-out.yaml file, you will notice that the CodeUri now has a different value from the one in template.yaml: it is a valid s3://BUCKET_NAME/randomnumber URL. The sam package command was smart enough to automatically figure out what the right value for this property should be, making sure your template has the right values for CloudFormation to use.

With the code uploaded and the template updated, we can deploy the application with AWS CloudFormation. The template file to use should be the one output by the sam package command (template-out.yaml). The stack name can be anything you like, as long as it sticks to the stack name convention. The region needs to be the specific region where you want your stack to be deployed; and if you remember, earlier in the SecretsManager code you specified a region, so make sure this value is the same or your code is not going to work. The capabilities value is a list of capabilities that you must specify before AWS CloudFormation can create certain stacks. As this template is going to provision IAM Roles on your behalf, it will affect the security of your account, so you need to specify CAPABILITY_IAM to basically give it permission to do so.

```shell
sam deploy --template-file template-out.yaml --stack-name auto-web --capabilities CAPABILITY_IAM --region eu-west-1
```

Now our function is done and should execute on the cron schedule.

If you need to change the parameter values, you can simply change the CloudFormation stack and kick off the provisioning again. If you need to test, you can go to the Lambda section and kick off the function; it should use the values that you specified as defaults in the template.

Conclusions

When I first undertook this, I figured it was going to be simple and straightforward. Very quickly I discovered my unknown unknowns. For me this underscores the important lesson of actually diving in and doing; until then you don't really know the effort involved. Here is a link to my code in Github for anyone interested.
