dan-v/awslambdaproxy: An AWS Lambda powered HTTP/SOCKS web proxy
awslambdaproxy is an AWS Lambda powered HTTP/SOCKS web proxy. It provides a constantly rotating IP address for your network traffic from all regions where AWS Lambda is available. The goal is to obfuscate your traffic and make it harder to track you as a user.
Features
HTTP/HTTPS/SOCKS5 proxy protocol support (including authentication).
No special client-side software required. Just configure your system to use a proxy.
Each configured AWS Lambda region provides a large pool of constantly rotating IP addresses.
Configurable IP rotation frequency between multiple regions.
Mostly AWS free tier compatible (see FAQ below).
Project status
Current code status: proof of concept. This is the first Go application that I’ve ever written. It has no tests. It may not work. It may blow up. Use at your own risk.
How it works
At a high level, awslambdaproxy proxies TCP/UDP traffic through AWS Lambda regional endpoints. To do this, awslambdaproxy is set up on a publicly accessible host (e.g. an EC2 instance) and it handles creating Lambda resources that run a proxy server (gost). Since Lambda does not allow you to connect to bound ports in executing functions, a reverse SSH tunnel is established from the Lambda function to the host running awslambdaproxy. Once a tunnel connection is established, all user traffic is forwarded through this reverse tunnel to the proxy server. Lambda functions have a max execution time of 15 minutes, so there is a goroutine that continuously executes Lambda functions to ensure there is always a live tunnel in place. If multiple regions are specified, user traffic will be routed in a round-robin fashion across these regions.
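To make the tunneling idea concrete, here is a rough sketch of the same concept using plain OpenSSH and gost commands. The host name and port numbers are placeholders for illustration, not the exact values awslambdaproxy uses internally:

# Conceptually, inside the Lambda function: start a local gost proxy and open a
# reverse SSH tunnel back to the publicly reachable host running awslambdaproxy.
gost -L socks5://127.0.0.1:1080 &
ssh -N -R 1080:127.0.0.1:1080 tunneluser@public-host

# On public-host, traffic sent to the forwarded port now exits via the Lambda
# function's IP address (run from public-host itself, since the reverse tunnel
# binds to localhost by default).
curl -x socks5h://127.0.0.1:1080 https://checkip.amazonaws.com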
Installation
Terraform
Clone the repository and go to the deployment/terraform directory:
git clone https://github.com/dan-v/awslambdaproxy.git && cd awslambdaproxy/deployment/terraform
Install Terraform and configure your Terraform backend. Read more about Terraform backends here.
Create and fill in a variable definitions file (read more here) if you don't want to use the default variable values defined in the project's Terraform configuration (an illustrative sketch appears after these steps).
Run these commands to init and apply configuration:
terraform init && terraform apply -auto-approve
This will create all dependent resources and run awslambdaproxy inside a Docker container. The EC2 instance SSH key can be found in AWS Secrets Manager in your AWS Management Console.
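If you need to SSH into the instance, the key can also be retrieved with the AWS CLI. The secret name below is a placeholder; use the name shown in the Terraform output or the console:

# Pull the private key from Secrets Manager (secret name is illustrative).
aws secretsmanager get-secret-value \
  --secret-id awslambdaproxy-ssh-key \
  --query SecretString --output text > awslambdaproxy.pem
chmod 600 awslambdaproxy.pem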
NOTE: Some AWS regions have a large list of IP CIDR blocks that can exceed the default security group limits (read more). In that case, you'll need to request a limit increase through the AWS Support Center by choosing Create Case and then Service Limit Increase to prevent deployment issues.
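For illustration only, a variable definitions file is just a set of key/value assignments that override the defaults. The variable names below are hypothetical; use the names actually declared in this project's Terraform configuration:

# terraform.tfvars (example only; these variable names are assumed, not the project's)
cat > terraform.tfvars <<'EOF'
region = "us-east-1"
name   = "awslambdaproxy"
EOF
terraform init && terraform apply -auto-approve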
Manual
Download a pre-built binary from the GitHub Releases page.
Copy the awslambdaproxy binary to a publicly accessible Linux host (e.g. an EC2 instance, VPS instance, etc.). You will need to open the following ports on this host:
Port 22 - functions executing in AWS Lambda will open SSH connections back to the host running awslambdaproxy, so this port needs to be open to the world. The SSH key used here is dynamically generated at startup and added to the running user's authorized_keys file.
Port 8080 - the default configuration will start an HTTP/SOCKS proxy listener on this port with default user/password authentication. If you don't want to publicly expose the proxy server, one option is to set up your own VPN server (e.g. dosxvpn or algo), connect to it, and run awslambdaproxy with the proxy listener only on localhost (-l localhost:8080).
Optional, but I'd highly recommend taking a look at the Minimal IAM Policies section below. This will allow you to set up the minimal permissions required to deploy and run the project. Otherwise, if you don't care about security, you can always use an access key with full administrator privileges.
awslambdaproxy needs access to AWS credentials in some form. This can be through exported environment variables (as shown below), a shared credentials file, or an IAM role assigned to the instance you are running it on. See the AWS credentials documentation for more details.
export AWS_ACCESS_KEY_ID=XXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=YYYYYYYYYYYYYYYYYYYYYY
Run ./awslambdaproxy setup.
Run ./awslambdaproxy run. For example:
./awslambdaproxy run -r us-west-2,us-west-1,us-east-1,us-east-2
Configure your web browser (or OS) to use the HTTP/SOCKS5 proxy on port 8080 of the publicly accessible host running awslambdaproxy.
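A quick way to confirm traffic is actually exiting through Lambda is to compare your external IP with and without the proxy. The credentials below are placeholders for whatever you configured with the -l flag:

# Direct request: shows your real public IP.
curl https://checkip.amazonaws.com
# Through the proxy: should show an AWS IP that changes as functions rotate.
curl -x socks5h://user:password@your-host:8080 https://checkip.amazonaws.com
curl -x http://user:password@your-host:8080 https://checkip.amazonaws.com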
Minimal IAM Policies
This assumes you have the AWS CLI set up with an admin user.
Create a user with proper permissions needed to run the setup command. This user can be removed after running the setup command.
aws iam create-user --user-name awslambdaproxy-setup
aws iam put-user-policy --user-name awslambdaproxy-setup --policy-name awslambdaproxy-setup --policy-document file://deployment/iam/
aws iam create-access-key --user-name awslambdaproxy-setup
{
    "AccessKey": {
        "UserName": "awslambdaproxy-setup",
        "Status": "Active",
        "CreateDate": "2017-04-17T06:15:18.858Z",
        "SecretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "AccessKeyId": "xxxxxxxxxxxxxx"
    }
}
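Since the setup user is only needed once, it can be deleted after the setup command completes. A sketch, where the access key ID is the one returned by the command above:

# Clean up the temporary setup user (policy and access key must be removed first).
aws iam delete-user-policy --user-name awslambdaproxy-setup --policy-name awslambdaproxy-setup
aws iam delete-access-key --user-name awslambdaproxy-setup --access-key-id <AccessKeyId from above>
aws iam delete-user --user-name awslambdaproxy-setup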
Create a user with the proper permissions needed to run the proxy.
aws iam create-user --user-name awslambdaproxy-run
aws iam put-user-policy --user-name awslambdaproxy-run --policy-name awslambdaproxy-run --policy-document file://deployment/iam/
aws iam create-access-key --user-name awslambdaproxy-run
{
    "AccessKey": {
        "UserName": "awslambdaproxy-run",
        "Status": "Active",
        "CreateDate": "2017-04-17T06:18:27.531Z",
        "SecretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "AccessKeyId": "xxxxxxxxxxxxxx"
    }
}
Examples
# execute proxy in four different regions with rotation happening every 60 seconds
./awslambdaproxy run -r us-west-2,us-west-1,us-east-1,us-east-2 -f 60s
# choose a different port and username/password for proxy and add another listener on localhost with no auth
./awslambdaproxy run -l "admin:[email protected]:8888,localhost:9090"
# bypass certain domains from using lambda proxy
./awslambdaproxy run -b "*., *."
# specify a dns server for the proxy server to use for dns lookups
./awslambdaproxy run -l "admin:[email protected]:8080?dns=1.1.1.1"
# increase function memory size for better network performance
./awslambdaproxy run -m 512
FAQ
Should I use awslambdaproxy? That’s up to you. Use at your own risk.
Why did you use AWS Lambda for this? The primary reason for using AWS Lambda in this project is the vast pool of IP addresses available that automatically rotate.
How big is the pool of available IP addresses? I don't know exactly, but I did not see a duplicate IP while running the proxy for a week.
Will this make me completely anonymous? No, absolutely not. The goal of this project is just to obfuscate your web traffic by rotating your IP address. All of your traffic is going through AWS which could be traced back to your account. You can also be tracked still with browser fingerprinting, etc. Your IP address may still leak due to WebRTC, Flash, etc.
How often will my external IP address change? I'm not sure, as that's specific to the internals of AWS Lambda and can change at any time. As an example, though: with 4 regions specified and rotation every 5 minutes, I saw around 15 unique IPs per hour.
How much does this cost? awslambdaproxy should be able to run mostly on the AWS free tier, minus bandwidth costs. It can run on a free tier eligible EC2 instance, and the default 128MB Lambda function that is constantly running should also fall within free tier usage. The bandwidth is what will cost you money; you will pay for bandwidth usage for both EC2 and Lambda.
Why does my connection drop periodically? AWS Lambda functions can currently only execute for a maximum of 15 minutes. In order to maintain an ongoing proxy, a new function is executed and all new traffic is cut over to it. Any ongoing connections to the previous Lambda function will hard stop after a timeout period. You generally won't see any issues for normal web browsing, as connections are very short lived, but you will see issues with long-lived connections. Consider using the --bypass flag to specify known domains that use persistent connections, to avoid having your connection constantly dropped for these.
yurymkomarov – streamlined the entire deployment process with Terraform.
unixfox – contributed the Docker image for awslambdaproxy.
gost – A simple security tunnel written in Golang.
yamux – Golang connection multiplexing library.
goad – Code was borrowed from this project to handle AWS Lambda zip creation and function upload.
Build From Source
Install Go and go-bindata
Fetch the project with git clone:
git clone https://github.com/dan-v/awslambdaproxy.git && cd awslambdaproxy
Run make to build awslambdaproxy. You’ll find your awslambdaproxy binary in the artifacts folder.
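A rough sketch of the full build, assuming Go, make, and go-bindata are already installed and on your PATH (see the steps above):

# Clone and build; the compiled binary is written to the artifacts folder.
git clone https://github.com/dan-v/awslambdaproxy.git && cd awslambdaproxy
make
ls artifacts/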
AWS Marketplace: Stealth ProxyBahn VPC4 Basic
Product Overview
A fast, turnkey, scalable and anonymizing HTTP private cloud proxy server that sets up in minutes: great for Google SERP checking applications, distributed unique IP request threading, web application stress testing and much more. Allows for anonymous internet browsing, scaling and cycling of public IPs with no instance configuration changes. Bind up to 4 IPs per micro instance in a 2-NIC VPC EC2 setup. Filter port access control using your EC2 Security Group, that's it! Need more than 4 public IPs? Scale your proxy network in multiples of 4 EIPs, or 1 dynamic IP per AMI instance, to your heart's content. Don't have Amazon AWS cloud infrastructure management skills or resources? Uses can be as simple as bypassing online web surfing policies or protecting your privacy at work and home. Or, what I like best, power using proxies to satisfy your SEO company's daily client search engine rank reporting needs without getting blocked by Google, Yahoo and Bing. Stealth ProxyBahn VPC4 allows desktop applications like WebCEO, Advanced Web Ranking, IBP, Scrapebox, Link Assistant, SEO Elite and more to query all major search engines using your keyword lists on a scheduled basis, quickly, without getting blocked.
Operating System
Linux/Unix, Other 3.10.53-56.140
Delivery Methods
Amazon Machine Image
Using ProxyCannon-NG to Create Unlimited Rotating Proxies
As always, please don’t be dumb. Operate within the laws of your country. I am not responsible for anything you may or may not do with this article and the information herein.
This is provided as an educational resource and should not be used to circumvent any security facilities.
The modern age of computers is amazing to me. In a few mere minutes we can spin up a seemingly unlimited number of virtual servers on any one of the hundreds of cloud providers out there around the world.
With all of this compute power, we also increase our ability to distribute our HTTP requests over a larger and larger pool of networks. I came across this awesome project on GitHub called proxycannon-ng.
It started as a hackathon project and development has since seemingly halted; however, we can still use its code to spin up a network of computers to proxy our requests through!
They provide a nice diagram to explain what we’re making here. For our examples we’ll only be using AWS but the concept is the same across the board.
Results of What We’re Making:
Creating the Control-Server
The control-server is an OpenVPN server that your workstation will connect to. This server always remains up. Exit-nodes are systems connected to the control-server that provide load balancing and multiple source IP addresses. Exit-nodes can scale up and down to suit your needs.
AWS (setup the control-server)
#1 – Create a separate SSH key pair
In the AWS console, go to services (upper left)
Select EC2 under the Compute section.
Select Key Pairs in the nav on the left.
Select Create Key Pair and name it ‘proxycannon’.
Download and save the key to ~/
#2 – Launch the control-server instance
Launch (1) Ubuntu Server t1-micro instance and use the proxycannon keypair.
Recommended public AMI: ami-0f65671a86f061fcd (only available in us-east-2); any Ubuntu Server 18.04 AMI should work.
Log in to the control-server via SSH.
Download and install proxycannon-ng
$ git clone
$ cd proxycannon-ng/setup
$ chmod +x ./
$ sudo ./
#3 – Create a new IAM user, set the needed permissions, and copy over your keys. It’s quick:
Select IAM under the Security, Identity & Compliance section
In IAM, select Users in the nav on the left.
Select Add user
Fill out a User name, and for access type, select programmatic access. Click Next.
Select the tab/box that's labeled Attach existing policies directly. Add the following policy: AmazonEC2FullAccess. Click Next, then Create user.
Copy the access key and secret and paste them into the AWS credentials file on the control-server:
[default]
aws_access_key_id = REPLACE_WITH_YOUR_OWN
aws_secret_access_key = REPLACE_WITH_YOUR_OWN
region = us-east-2
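It may be worth confirming the pasted credentials actually work before continuing. A minimal check with the AWS CLI, assuming it is installed on the control-server:

# Verify the access key is valid and that the EC2 permissions apply in us-east-2.
aws sts get-caller-identity
aws ec2 describe-instances --region us-east-2 --query 'Reservations[].Instances[].InstanceId'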
#4 – Setup terraform
Perform the following on the control-server:
Copy your SSH key into ~/
cd into proxycannon-ng/nodes/aws and edit the file, updating it with the subnet_id. This is the same subnet_id that your control-server is using. You can get this value from the AWS console when viewing the details of the control-server instance. Defining this subnet_id makes sure all launched exit-nodes are in the same subnet as your control-server.
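If you prefer the CLI to the console, the control-server's subnet ID can also be looked up directly. The instance ID below is a placeholder for your control-server's instance ID:

# Print the subnet the control-server lives in.
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --region us-east-2 \
  --query 'Reservations[].Instances[].SubnetId' --output text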
Run terraform init to download the AWS modules. (you only need to do this once)
#5 – Copy OpenVPN files to your workstation
Copy the following files from the control-server to the /etc/openvpn directory on your workstation:
~/
/etc/openvpn/easy-rsa/keys/
You can also run the script below to compress everything you need to ~/, then download that archive and extract it to /etc/openvpn on your workstation.
# Copy necessary files and compress to ~/
$ mkdir ~/copy_me
$ sudo cp ~/ ~/copy_me
$ sudo cp /etc/openvpn/easy-rsa/keys/ ~/copy_me
$ tar czfv ~/ ~/copy_me
After you have copied and extracted the files to /etc/openvpn on your workstation, test OpenVPN connectivity from your workstation by running:
$ openvpn --config
Setup Completed!
From now on you’ll only need to connect to the VPN to use proxycannon-ng.
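Once the VPN is connected, a quick check from the workstation shows whether requests are leaving through the exit-nodes. Repeated calls should return different source IPs as traffic is balanced across the nodes:

# Each request should egress from one of the exit-nodes rather than your own IP.
for i in 1 2 3 4 5; do curl -s https://checkip.amazonaws.com; done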
The next section details how to add and remove exit-nodes (source IPs):
Managing exit-nodes
Scaling of exit-nodes is controlled on the control-server using terraform.
Scale up exit-nodes
To create AWS exit-nodes, do the following:
cd into proxycannon-ng/nodes/aws
Edit the count value in the Terraform configuration to the number of exit-nodes (source IPs) you'd like.
run terraform apply to launch the instances.
Scale down exit-nodes
If you want to stop all exit-nodes, run terraform destroy.
OR
Scaling down exit-nodes can be done by reducing the count value and running terraform apply again; Terraform will automatically remove the excess exit-node instances.
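Either way, Terraform's state is an easy sanity check after scaling up or down:

# List the exit-node resources Terraform is currently managing.
terraform state list
terraform show | grep -i instance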
Credits to the proxycannon-ng team for this awesome project and its fundamental documentation!