
v3.4.0 #638

Merged: 11 commits, Oct 29, 2021
Binary file added Dashboard.png
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,4 +1,4 @@
FROM python:3.8-alpine as base
FROM python:3.10-alpine as base
FROM base as builder

COPY requirements.txt /app/
45 changes: 42 additions & 3 deletions README.md
@@ -23,8 +23,6 @@
[![Python 3.x](https://img.shields.io/static/v1?label=Python&message=3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9%20%7C%203.10&color=blue)](https://www.python.org/)
[![Checked with mypy](https://img.shields.io/static/v1?label=MyPy&message=checked&color=blue)](http://mypy-lang.org/)

[![Become a Backer](https://opencollective.com/proxypy/tiers/backer.svg?avatarHeight=72)](https://opencollective.com/proxypy)

# Table of Contents

- [Features](#features)
@@ -94,6 +92,8 @@
- [Public Key Infrastructure](#pki)
- [API Usage](#api-usage)
- [CLI Usage](#cli-usage)
- [Run Dashboard](#run-dashboard)
- [Inspect Traffic](#inspect-traffic)
- [Frequently Asked Questions](#frequently-asked-questions)
- [Threads vs Threadless](#threads-vs-threadless)
- [SyntaxError: invalid syntax](#syntaxerror-invalid-syntax)
@@ -1537,6 +1537,45 @@ FILE
/Users/abhinav/Dev/proxy.py/proxy/__init__.py
```

# Run Dashboard

Dashboard is currently under development and not yet bundled with `pip` packages. To run the dashboard, you must check out the source.

The dashboard is written in TypeScript and SCSS, so let's build it first:

```bash
$ make dashboard
```

Now start `proxy.py` with the dashboard plugin enabled, overriding the static server root directory:

```bash
$ proxy --enable-dashboard --static-server-dir dashboard/public
...[redacted]... - Loaded plugin proxy.http.server.HttpWebServerPlugin
...[redacted]... - Loaded plugin proxy.dashboard.dashboard.ProxyDashboard
...[redacted]... - Loaded plugin proxy.dashboard.inspect_traffic.InspectTrafficPlugin
...[redacted]... - Loaded plugin proxy.http.inspector.DevtoolsProtocolPlugin
...[redacted]... - Loaded plugin proxy.http.proxy.HttpProxyPlugin
...[redacted]... - Listening on ::1:8899
...[redacted]... - Core Event enabled
```

Currently, enabling the dashboard also enables all of the dashboard plugins.

Visit the dashboard:

```bash
$ open http://localhost:8899/dashboard/
```

## Inspect Traffic

Wait for the embedded `Chrome Dev Console` to load. Currently, details about all traffic flowing through `proxy.py` are pushed to the `Inspect Traffic` tab. However, received payloads are not yet integrated with the embedded dev console.

Current functionality can be verified by opening the `Dev Console` of the dashboard and inspecting the websocket connection that the dashboard has established with the `proxy.py` server.

[![Proxy.Py Dashboard Inspect Traffic](https://raw.githubusercontent.com/abhinavsingh/proxy.py/v3.4.0/Dashboard.png)](https://github.com/abhinavsingh/proxy.py)

# Frequently Asked Questions

## Threads vs Threadless
@@ -1676,7 +1715,7 @@ usage: proxy [-h] [--threadless] [--backlog BACKLOG] [--enable-events] [--hostna
[--ca-file CA_FILE] [--ca-signing-key-file CA_SIGNING_KEY_FILE] [--cert-file CERT_FILE] [--disable-headers DISABLE_HEADERS] [--server-recvbuf-size SERVER_RECVBUF_SIZE] [--basic-auth BASIC_AUTH]
[--cache-dir CACHE_DIR] [--static-server-dir STATIC_SERVER_DIR] [--pac-file PAC_FILE] [--pac-file-url-path PAC_FILE_URL_PATH] [--filtered-client-ips FILTERED_CLIENT_IPS]

proxy.py v2.3.1
proxy.py v2.4.0

options:
-h, --help show this help message and exit
2 changes: 1 addition & 1 deletion proxy/common/version.py
@@ -8,5 +8,5 @@
:copyright: (c) 2013-present by Abhinav Singh and contributors.
:license: BSD, see LICENSE for more details.
"""
VERSION = (2, 3, 1)
VERSION = (2, 4, 0)
__version__ = '.'.join(map(str, VERSION[0:3]))
19 changes: 10 additions & 9 deletions proxy/core/acceptor/pool.py
@@ -63,11 +63,10 @@


class AcceptorPool:
"""AcceptorPool.

Pre-spawns worker processes to utilize all cores available on the system.
"""AcceptorPool pre-spawns worker processes to utilize all cores available on the system.
A server socket is initialized and dispatched over a pipe to these workers.
Each worker process then accepts new client connection.
Each worker process then concurrently accepts new client connections over
the initialized server socket.

Example usage:

@@ -83,7 +82,9 @@ class AcceptorPool:

Optionally, AcceptorPool also initializes a global event queue.
It is a multiprocess safe queue which can be used to build pubsub patterns
for message sharing or signaling within proxy.py.
for message sharing or signaling.

TODO(abhinavsingh): Decouple event queue setup & teardown into its own class.
"""

def __init__(self, flags: argparse.Namespace,
@@ -110,9 +111,10 @@ def listen(self) -> None:
self.socket.bind((str(self.flags.hostname), self.flags.port))
self.socket.listen(self.flags.backlog)
self.socket.setblocking(False)
logger.info(
'Listening on %s:%d' %
(self.flags.hostname, self.flags.port))
# Override flags.port to match the actual port
# we are listening upon. This is necessary to preserve
# the server port when `--port=0` is used.
self.flags.port = self.socket.getsockname()[1]

def start_workers(self) -> None:
"""Start worker processes."""
@@ -172,7 +174,6 @@ def setup(self) -> None:
logger.info('Core Event enabled')
self.start_event_dispatcher()
self.start_workers()

# Send server socket to all acceptor processes.
assert self.socket is not None
for index in range(self.flags.num_workers):
18 changes: 15 additions & 3 deletions proxy/core/acceptor/threadless.py
@@ -33,7 +33,13 @@


class Threadless(multiprocessing.Process):
"""Threadless provides an event loop. Use it by implementing Threadless class.
"""Threadless process provides an event loop.

Internally, for each client connection, an instance of `work_klass`
is created. Threadless will invoke the necessary lifecycle methods of the `Work` class,
allowing implementations to handle accepted client connections as they wish.

Note that all `Work` implementations share the same underlying event loop.

When --threadless option is enabled, each Acceptor process also
spawns one Threadless process. And instead of spawning new thread
@@ -92,8 +98,13 @@ async def handle_events(
async def wait_for_tasks(
self, tasks: Dict[int, Any]) -> None:
for work_id in tasks:
# TODO: Resolving one handle_events here can block resolution of
# other tasks
# TODO: Resolving one handle_events here can block
# resolution of other tasks. This can happen when handle_events
# is slow.
#
# Instead of sequential await, a better option would be to await on
# list of async handle_events. This will allow all handlers to run
# concurrently without blocking each other.
try:
teardown = await asyncio.wait_for(tasks[work_id], DEFAULT_TIMEOUT)
if teardown:
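The TODO above can be illustrated with a toy sketch (names here are hypothetical, not proxy.py's API): awaiting handlers one by one serializes them, while `asyncio.gather` resolves the whole batch concurrently.

```python
import asyncio
from typing import Iterable, List

async def handle(work_id: int) -> bool:
    # Stand-in for an async handle_events; True requests teardown.
    await asyncio.sleep(0.01)
    return work_id % 2 == 0

async def sequential(ids: Iterable[int]) -> List[bool]:
    # Current approach: one slow handler delays resolution of the rest.
    return [await asyncio.wait_for(handle(i), timeout=1) for i in ids]

async def concurrent(ids: Iterable[int]) -> List[bool]:
    # Suggested approach: await the batch so handlers run concurrently.
    return await asyncio.gather(
        *(asyncio.wait_for(handle(i), timeout=1) for i in ids))

if __name__ == '__main__':
    print(asyncio.run(concurrent(range(4))))  # [True, False, True, False]
```

Both return results in submission order; the difference is that total latency of `concurrent` is bounded by the slowest handler rather than the sum of all handlers.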
@@ -152,6 +163,7 @@ def run_once(self) -> None:
# until all the logic below completes.
#
# Invoke Threadless.handle_events
#
# TODO: Only send readable / writables that client originally
# registered.
tasks = {}
30 changes: 15 additions & 15 deletions proxy/http/handler.py
@@ -65,7 +65,7 @@
class HttpProtocolHandler(Work):
"""HTTP, HTTPS, HTTP2, WebSockets protocol handler.

Accepts `Client` connection object and manages HttpProtocolHandlerPlugin invocations.
Accepts `Client` connection and delegates to HttpProtocolHandlerPlugin.
"""

def __init__(self, client: TcpClientConnection,
@@ -86,15 +86,28 @@ def encryption_enabled(self) -> bool:
return self.flags.keyfile is not None and \
self.flags.certfile is not None

def optionally_wrap_socket(
self, conn: socket.socket) -> Union[ssl.SSLSocket, socket.socket]:
"""Attempts to wrap accepted client connection using provided certificates.

Shutdown and closes client connection upon error.
"""
if self.encryption_enabled():
assert self.flags.keyfile and self.flags.certfile
# TODO(abhinavsingh): Insecure TLS versions must not be accepted by default
conn = wrap_socket(conn, self.flags.keyfile, self.flags.certfile)
return conn

def initialize(self) -> None:
"""Optionally upgrades connection to HTTPS, set conn in non-blocking mode and initializes plugins."""
conn = self.optionally_wrap_socket(self.client.connection)
conn.setblocking(False)
# Update client connection reference if connection was wrapped
if self.encryption_enabled():
self.client = TcpClientConnection(conn=conn, addr=self.client.addr)
if b'HttpProtocolHandlerPlugin' in self.flags.plugins:
for klass in self.flags.plugins[b'HttpProtocolHandlerPlugin']:
instance = klass(
instance: HttpProtocolHandlerPlugin = klass(
self.uid,
self.flags,
self.client,
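The TODO about insecure TLS versions could be addressed with an `ssl.SSLContext` that pins a minimum protocol version. A hedged sketch, not the code this diff ships (`wrap_socket` above is proxy.py's own helper):

```python
import ssl

def secure_server_context() -> ssl.SSLContext:
    # Refuse TLS < 1.2; a caller would then load real certificates via
    # ctx.load_cert_chain(certfile, keyfile) before wrapping client sockets.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Wrapping would then be `ctx.wrap_socket(conn, server_side=True)` instead of the module-level `ssl.wrap_socket`, which was deprecated in Python 3.7.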
@@ -115,7 +128,6 @@ def get_events(self) -> Dict[socket.socket, int]:
}
if self.client.has_buffer():
events[self.client.connection] |= selectors.EVENT_WRITE

# HttpProtocolHandlerPlugin.get_descriptors
for plugin in self.plugins.values():
plugin_read_desc, plugin_write_desc = plugin.get_descriptors()
@@ -129,7 +141,6 @@ def get_events(self) -> Dict[socket.socket, int]:
events[w] = selectors.EVENT_WRITE
else:
events[w] |= selectors.EVENT_WRITE

return events

def handle_events(
@@ -189,17 +200,6 @@ def shutdown(self) -> None:
logger.debug('Client connection closed')
super().shutdown()

def optionally_wrap_socket(
self, conn: socket.socket) -> Union[ssl.SSLSocket, socket.socket]:
"""Attempts to wrap accepted client connection using provided certificates.

Shutdown and closes client connection upon error.
"""
if self.encryption_enabled():
assert self.flags.keyfile and self.flags.certfile
conn = wrap_socket(conn, self.flags.keyfile, self.flags.certfile)
return conn

def connection_inactive_for(self) -> float:
return time.time() - self.last_activity

12 changes: 12 additions & 0 deletions proxy/http/plugin.py
@@ -68,14 +68,23 @@ def name(self) -> str:
@abstractmethod
def get_descriptors(
self) -> Tuple[List[socket.socket], List[socket.socket]]:
"""Implementations must return a list of descriptions that they wish to
read from and write into."""
return [], [] # pragma: no cover

@abstractmethod
def write_to_descriptors(self, w: Writables) -> bool:
"""Implementations must now write/flush data over the socket.

Note that buffer management is built into the connection classes.
Hence implementations MUST call `flush` here, to send any buffered data
over the socket.
"""
return False # pragma: no cover

@abstractmethod
def read_from_descriptors(self, r: Readables) -> bool:
"""Implementations must now read data over the socket."""
return False # pragma: no cover

@abstractmethod
@@ -96,4 +105,7 @@ def on_response_chunk(self, chunk: List[memoryview]) -> List[memoryview]:

@abstractmethod
def on_client_connection_close(self) -> None:
"""Client connection shutdown has been received, flush has been called,
perform any cleanup work here.
"""
pass # pragma: no cover
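The buffer-management contract described in `write_to_descriptors` can be illustrated with a toy connection class (hypothetical, not proxy.py's `TcpConnection`): callers queue bytes, and `flush` is what actually moves them out.

```python
class BufferedConnection:
    """Toy illustration of connection-side buffering."""

    def __init__(self) -> None:
        self._buffer = bytearray()
        self.sent = bytearray()  # stands in for bytes written to a real socket

    def queue(self, data: bytes) -> None:
        self._buffer += data

    def has_buffer(self) -> bool:
        return len(self._buffer) > 0

    def flush(self) -> None:
        # A real implementation would call socket.send() here.
        self.sent += self._buffer
        del self._buffer[:]
```

This is why a plugin's `write_to_descriptors` must call `flush`: data handed to `queue` never reaches the peer otherwise.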
39 changes: 36 additions & 3 deletions proxy/http/proxy/plugin.py
@@ -8,13 +8,16 @@
:copyright: (c) 2013-present by Abhinav Singh and contributors.
:license: BSD, see LICENSE for more details.
"""
from abc import ABC, abstractmethod
import socket
import argparse
from typing import Optional

from uuid import UUID
from ..parser import HttpParser
from typing import List, Optional, Tuple
from abc import ABC, abstractmethod

from ..parser import HttpParser

from ...common.types import Readables, Writables
from ...core.event import EventQueue
from ...core.connection import TcpClientConnection

@@ -42,6 +45,36 @@ def name(self) -> str:
access a specific plugin by its name."""
return self.__class__.__name__ # pragma: no cover

# TODO(abhinavsingh): get_descriptors, write_to_descriptors, read_from_descriptors
# can be placed into their own abstract class which can then be shared by
# HttpProxyBasePlugin, HttpWebServerBasePlugin and HttpProtocolHandlerPlugin class.
#
# Currently code has been shamelessly copied. Also these methods are not
# marked as abstract to avoid breaking custom plugins written by users for
# previous versions of proxy.py
#
# Since 3.4.0
#
# @abstractmethod
def get_descriptors(
self) -> Tuple[List[socket.socket], List[socket.socket]]:
return [], [] # pragma: no cover

# @abstractmethod
def write_to_descriptors(self, w: Writables) -> bool:
"""Implementations must now write/flush data over the socket.

Note that buffer management is built into the connection classes.
Hence implementations MUST call `flush` here, to send any buffered data
over the socket.
"""
return False # pragma: no cover

# @abstractmethod
def read_from_descriptors(self, r: Readables) -> bool:
"""Implementations must now read data over the socket."""
return False # pragma: no cover

@abstractmethod
def before_upstream_connection(
self, request: HttpParser) -> Optional[HttpParser]:
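One way to realize the TODO above is a small mixin providing default no-op descriptor hooks that the plugin base classes could share instead of copying the methods. A sketch under the assumption of such a refactor (class and alias names are invented here):

```python
import socket
from typing import List, Tuple

# Hypothetical aliases mirroring proxy.common.types.Readables / Writables.
Readables = List[socket.socket]
Writables = List[socket.socket]

class DescriptorsHandlerMixin:
    """Default no-op descriptor hooks for plugin base classes to inherit."""

    def get_descriptors(self) -> Tuple[Readables, Writables]:
        return [], []

    def write_to_descriptors(self, w: Writables) -> bool:
        return False

    def read_from_descriptors(self, r: Readables) -> bool:
        return False
```

Because the defaults are no-ops rather than abstract methods, existing user plugins that never override them would keep working, which is the compatibility concern the comment raises.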
27 changes: 24 additions & 3 deletions proxy/http/proxy/server.py
@@ -122,7 +122,7 @@ def __init__(
self.plugins: Dict[str, HttpProxyBasePlugin] = {}
if b'HttpProxyBasePlugin' in self.flags.plugins:
for klass in self.flags.plugins[b'HttpProxyBasePlugin']:
instance = klass(
instance: HttpProxyBasePlugin = klass(
self.uid,
self.flags,
self.client,
@@ -147,10 +147,25 @@ def get_descriptors(
if self.server and not self.server.closed and \
self.server.has_buffer() and self.server.connection:
w.append(self.server.connection)

# TODO(abhinavsingh): We need to keep a mapping of plugin and
# descriptors registered by them, so that within write/read blocks
# we can invoke the right plugin callbacks.
for plugin in self.plugins.values():
plugin_read_desc, plugin_write_desc = plugin.get_descriptors()
r.extend(plugin_read_desc)
w.extend(plugin_write_desc)

return r, w

def write_to_descriptors(self, w: Writables) -> bool:
if self.request.has_upstream_server() and \
if self.server and self.server.connection not in w:
# Currently, we just call the write/read block of each plugin. It is
# the plugin's responsibility to ignore this callback if the passed
# descriptors don't contain the descriptor it registered.
for plugin in self.plugins.values():
plugin.write_to_descriptors(w)
elif self.request.has_upstream_server() and \
self.server and not self.server.closed and \
self.server.has_buffer() and \
self.server.connection in w:
@@ -172,7 +187,13 @@ def write_to_descriptors(self, w: Writables) -> bool:
return False

def read_from_descriptors(self, r: Readables) -> bool:
if self.request.has_upstream_server() \
if self.server and self.server.connection not in r:
# Currently, we just call the write/read block of each plugin. It is
# the plugin's responsibility to ignore this callback if the passed
# descriptors don't contain the descriptor it registered.
for plugin in self.plugins.values():
plugin.read_from_descriptors(r)
elif self.request.has_upstream_server() \
and self.server \
and not self.server.closed \
and self.server.connection in r:
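The descriptor-to-plugin mapping suggested by the TODO in `get_descriptors` could look like the following sketch (function and class names are hypothetical): record which plugin registered which file descriptor, so events can later be routed only to the owning plugin instead of broadcast to all of them.

```python
import socket
from typing import Dict

def map_descriptors_to_plugins(plugins) -> Dict[int, str]:
    # Remember which plugin registered which fd so that read/write events
    # can later be dispatched only to the plugin that owns the descriptor.
    owner: Dict[int, str] = {}
    for name, plugin in plugins.items():
        readables, writables = plugin.get_descriptors()
        for sock in list(readables) + list(writables):
            owner[sock.fileno()] = name
    return owner
```

With such a map, the write/read blocks above could call `plugins[owner[fd]]` directly rather than iterating every plugin on every event.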