Compare commits

...

29 Commits

Author SHA1 Message Date
JinU Choi
8e2f1e68cb Merge pull request #21 from dalbodeule/develop
[release] 1.0.0
2025-12-11 19:40:08 +09:00
JinU Choi
983332b3d8 Merge pull request #20 from dalbodeule/feature/grpc-tunneling
[feat] Switch the DTLS-based HTTP tunnel to a gRPC-based HTTP/2 tunnel
2025-12-11 19:38:50 +09:00
dalbodeule
38f05db0dc [feat](client): add local HTTP proxying for gRPC-based tunnels
- Enhanced gRPC client with logic to forward incoming tunnel streams as HTTP requests to a local target.
- Implemented per-stream state management for matching StreamOpen/StreamData/StreamClose to HTTP requests/responses.
- Added mechanisms to assemble HTTP requests, send them locally, and respond via tunnel streams.
- Introduced a configurable HTTP client with proper headers and connection settings for robust forwarding.
2025-12-11 19:05:26 +09:00
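The per-stream state management this commit describes can be sketched as follows. This is a minimal illustration, not the actual client code: the `StreamOpen`/`StreamData`/`StreamClose` structs stand in for the generated protobuf types, and names like `assembler` are hypothetical.

```go
package tunnel

import (
	"bytes"
	"sync"
)

// Stand-ins for the generated protobuf stream messages.
type StreamOpen struct {
	Id     string
	Header map[string][]string
}
type StreamData struct {
	Id   string
	Seq  uint64
	Data []byte
}
type StreamClose struct{ Id, Error string }

// inbound accumulates one logical HTTP request per stream ID.
type inbound struct {
	open *StreamOpen
	body bytes.Buffer
}

// assembler matches StreamOpen/StreamData/StreamClose frames to requests.
type assembler struct {
	mu      sync.Mutex
	streams map[string]*inbound
}

func newAssembler() *assembler {
	return &assembler{streams: make(map[string]*inbound)}
}

func (a *assembler) OnOpen(so *StreamOpen) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.streams[so.Id] = &inbound{open: so}
}

func (a *assembler) OnData(sd *StreamData) {
	a.mu.Lock()
	st := a.streams[sd.Id]
	a.mu.Unlock()
	if st != nil {
		st.body.Write(sd.Data) // buffer body chunks until StreamClose arrives
	}
}

// OnClose returns the assembled request, ready to forward to the local target.
func (a *assembler) OnClose(sc *StreamClose) (*StreamOpen, []byte, bool) {
	a.mu.Lock()
	st := a.streams[sc.Id]
	delete(a.streams, sc.Id)
	a.mu.Unlock()
	if st == nil {
		return nil, nil, false
	}
	return st.open, st.body.Bytes(), true
}
```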
dalbodeule
a41bd34179 [feat](server, errorpages): add gRPC-based tunnel session handling and favicon support
- Implemented gRPC-based tunnel sessions for multiplexing HTTP requests via `grpcTunnelSession` with features like `recvLoop`, `send`, and per-stream state management.
- Registered and unregistered tunnels for domains, replacing DTLS-based sessions for improved scalability and maintainability.
- Integrated domain validation checks during gRPC tunnel handshake with configurable validator support.
- Modified static error pages (`400.html`, `404.html`, `502.html`, `504.html`, `500.html`, `525.html`) to include favicon linking, enhancing error page presentation.
2025-12-11 18:49:56 +09:00
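A minimal sketch of the per-domain tunnel registration and unregistration this commit describes, assuming a simplified session interface; `tunnelRegistry` and its methods are illustrative names, not the actual `grpcTunnelSession` API.

```go
package tunnel

import (
	"errors"
	"sync"
)

// session is a placeholder for the commit's grpcTunnelSession.
type session interface {
	Send(frame []byte) error
}

// tunnelRegistry maps a registered domain to its live tunnel session,
// replacing the previous per-domain DTLS session table.
type tunnelRegistry struct {
	mu       sync.RWMutex
	byDomain map[string]session
}

func newTunnelRegistry() *tunnelRegistry {
	return &tunnelRegistry{byDomain: make(map[string]session)}
}

// Register binds a domain to a session after a successful handshake.
func (r *tunnelRegistry) Register(domain string, s session) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, busy := r.byDomain[domain]; busy {
		return errors.New("domain already has an active tunnel")
	}
	r.byDomain[domain] = s
	return nil
}

// Unregister drops the mapping when the gRPC stream ends.
func (r *tunnelRegistry) Unregister(domain string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.byDomain, domain)
}

// Lookup is what the HTTPS reverse proxy calls for each incoming request.
func (r *tunnelRegistry) Lookup(domain string) (session, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	s, ok := r.byDomain[domain]
	return s, ok
}
```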
dalbodeule
e388e5a272 [debug](server): add temporary debug log for gRPC routing inspection
- Added a debug log in `grpcOrHTTPHandler` to output protocol, content type, host, and path information.
2025-12-11 17:10:56 +09:00
dalbodeule
d93440f4b3 [chore](docker): remove unused DTLS-related UDP port mapping
- Removed `443/udp` from `EXPOSE` in `Dockerfile.server`.
- Removed UDP port mapping for `443` in `docker-compose.yml`.
2025-12-11 17:00:36 +09:00
dalbodeule
1492a1a82c [feat](protocol): update go_package path and regen related Protobuf types
- Changed `go_package` option in `hopgate_stream.proto` to `internal/protocol/pb;pb`.
- Regenerated `hopgate_stream.pb.go` with updated package path to align with new structure.
- Added `protocol.md` documenting the gRPC-based HTTP tunneling protocol.
2025-12-11 17:00:12 +09:00
dalbodeule
64f730d2df [feat](protocol, client, server): replace DTLS with gRPC for tunnel implementation
- Introduced gRPC-based tunnel design for bi-directional communication, replacing legacy DTLS transport.
- Added `HopGateTunnel` gRPC service with client and server logic for `OpenTunnel` stream handling.
- Updated client to use gRPC tunnel exclusively, including experimental entry point for stream-based HTTP proxying.
- Removed DTLS-specific client, server, and related dependencies (`pion/dtls`).
- Adjusted `cmd/server` to route gRPC and HTTP/HTTPS traffic dynamically on shared ports.
2025-12-11 16:48:17 +09:00
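The `OpenTunnel` handling added here follows the usual grpc-go bi-directional streaming shape. A rough sketch, with a stand-in `Envelope` type and a hand-written stream interface in place of the generated `HopGateTunnel_OpenTunnelServer`:

```go
package tunnel

import (
	"io"
	"log"
)

// Envelope stands in for the generated protobuf envelope message.
type Envelope struct{}

// bidiStream is a hand-written stand-in for the generated
// HopGateTunnel_OpenTunnelServer stream interface (Send/Recv).
type bidiStream interface {
	Send(*Envelope) error
	Recv() (*Envelope, error)
}

// tunnelServer sketches the server half of OpenTunnel: one long-lived
// bi-directional stream per client, pumped by a receive loop.
type tunnelServer struct{}

func (s *tunnelServer) OpenTunnel(stream bidiStream) error {
	for {
		env, err := stream.Recv()
		if err == io.EOF {
			return nil // client closed the tunnel cleanly
		}
		if err != nil {
			return err
		}
		// Real code would demultiplex env into per-stream handlers;
		// echoing it back is only a placeholder.
		if err := stream.Send(env); err != nil {
			log.Printf("tunnel send failed: %v", err)
			return err
		}
	}
}
```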
dalbodeule
17839def69 [feat](docs): update ARCHITECTURE.md to reflect gRPC-based tunnel design
- Replaced legacy DTLS transport details with gRPC/HTTP2 tunnel architecture.
- Updated server and client roles to describe gRPC bi-directional stream-based request/response handling.
- Revised internal component descriptions and flow diagrams to align with gRPC-based implementation.
- Marked DTLS sections as deprecated and documented planned removal in future versions.
2025-12-11 16:07:15 +09:00
dalbodeule
faea425e57 [feat](client, server): enable concurrent HTTP stream handling per DTLS session
- Removed session-level serialization lock for HTTP requests (`requestMu`) to support concurrent stream processing.
- Introduced centralized `readLoop` and `streamReceiver` design for stream demultiplexing and individual stream handling.
- Updated client to handle multiple `StreamOpen/StreamData/StreamClose` per session concurrently with per-stream ARQ state.
- Enhanced server logic for efficient HTTP mapping and response streaming in a concurrent stream environment.
2025-12-10 13:05:12 +09:00
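The `readLoop`/`streamReceiver` split can be illustrated with a small demultiplexer: one goroutine owns the session's read side and fans frames out to per-stream channels, each drained by its own receiver goroutine. The `frame` type and buffer size below are placeholders, not the project's actual types.

```go
package tunnel

import "sync"

// frame is a stand-in for one decoded StreamOpen/StreamData/StreamClose.
type frame struct {
	StreamID string
	Last     bool // true on the frame that closes the stream
}

// demux fans frames out from a single read loop to per-stream receivers,
// so streams are handled concurrently instead of serially.
type demux struct {
	mu        sync.Mutex
	receivers map[string]chan frame
}

func newDemux() *demux { return &demux{receivers: make(map[string]chan frame)} }

// readLoop is the single goroutine that owns the session's read side.
func (d *demux) readLoop(frames <-chan frame, handle func(<-chan frame)) {
	for f := range frames {
		d.mu.Lock()
		ch, ok := d.receivers[f.StreamID]
		if !ok {
			ch = make(chan frame, 16) // buffered so readLoop rarely blocks
			d.receivers[f.StreamID] = ch
			go handle(ch) // one streamReceiver goroutine per stream
		}
		d.mu.Unlock()
		ch <- f
		if f.Last {
			d.mu.Lock()
			delete(d.receivers, f.StreamID)
			d.mu.Unlock()
			close(ch)
		}
	}
}
```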
JinU Choi
9b7369233c Merge pull request #19 from dalbodeule/copilot/fix-dtls-buffer-error
Fix DTLS buffer size, concurrent request handling, and client frame robustness
2025-12-10 01:26:19 +09:00
dalbodeule
661f8b6413 [feat](server): serialize HTTP requests per DTLS session with session-level mutex
- Added `requestMu` mutex in `dtlsSessionWrapper` to serialize HTTP request handling per DTLS session.
- Prevents interleaved HTTP request streams on clients that process one stream at a time.
- Updated `ForwardHTTP` logic to lock and unlock around HTTP request handling for safe serialization.
- Documented behavior and rationale in `progress.md` for future multiplexing enhancements.
2025-12-10 01:25:56 +09:00
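The interim serialization amounts to holding one mutex across each request/response exchange. A sketch under assumed names (the real `dtlsSessionWrapper` and `ForwardHTTP` signatures may differ); the concurrent-streams commit above removes this lock in favor of real multiplexing:

```go
package tunnel

import (
	"net/http"
	"sync"
)

// dtlsSession stands in for the commit's dtlsSessionWrapper.
type dtlsSession struct {
	requestMu sync.Mutex // serializes HTTP exchanges on this session
}

// ForwardHTTP holds requestMu across the whole request/response
// exchange, so a client that processes one stream at a time never
// sees interleaved frames from concurrent requests.
func (s *dtlsSession) ForwardHTTP(w http.ResponseWriter, r *http.Request) {
	s.requestMu.Lock()
	defer s.requestMu.Unlock()
	// ... encode r onto the session, then stream the response into w ...
}
```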
dalbodeule
05dfff21f6 [feat](server, protocol): add sender and receiver ARQ for reliable HTTP stream delivery
- Implemented application-level ARQ with selective retransmission for server-to-client streams, leveraging `StreamAck` logic.
- Added sender-side ARQ state in `streamSender` for tracking and resending unacknowledged frames.
- Introduced receiver-side ARQ with `AckSeq` and `LostSeqs` for handling out-of-order and lost frames.
- Enhanced `dtlsSessionWrapper` to support ARQ management and seamless stream-based DTLS tunneling.
2025-12-10 01:12:58 +09:00
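Sender-side ARQ of this kind boils down to keeping unacknowledged frames keyed by sequence number and resending the ones a `StreamAck` reports lost. A sketch with assumed shapes (`AckSeq`/`LostSeqs` follow the commit text; everything else is illustrative):

```go
package tunnel

import "sync"

// ack mirrors the commit's StreamAck: a cumulative AckSeq plus the
// receiver's list of missing sequence numbers (LostSeqs).
type ack struct {
	AckSeq   uint64
	LostSeqs []uint64
}

// streamSender keeps every unacknowledged frame so it can be resent
// selectively instead of restarting the whole stream.
type streamSender struct {
	mu      sync.Mutex
	unacked map[uint64][]byte // seq -> frame payload
	send    func(seq uint64, data []byte) error
}

func (s *streamSender) Send(seq uint64, data []byte) error {
	s.mu.Lock()
	s.unacked[seq] = data
	s.mu.Unlock()
	return s.send(seq, data)
}

// OnAck drops everything at or below AckSeq, then retransmits the
// frames the receiver reported as lost.
func (s *streamSender) OnAck(a ack) error {
	s.mu.Lock()
	for seq := range s.unacked {
		if seq <= a.AckSeq {
			delete(s.unacked, seq)
		}
	}
	retransmit := make(map[uint64][]byte, len(a.LostSeqs))
	for _, seq := range a.LostSeqs {
		if data, ok := s.unacked[seq]; ok {
			retransmit[seq] = data
		}
	}
	s.mu.Unlock()
	for seq, data := range retransmit {
		if err := s.send(seq, data); err != nil {
			return err
		}
	}
	return nil
}
```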
copilot-swe-agent[bot]
446a265fa2 Improve code readability in client stream frame handlers
- Extract stream IDs before logging calls for better readability
- Remove unnecessary anonymous functions
- Address code review feedback

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:46:00 +00:00
copilot-swe-agent[bot]
56916c75f4 Fix client handling of unexpected stream frames at top level
- Add handling for StreamData and StreamClose at top level in client StartLoop
- Log warning and continue instead of crashing on unexpected frames
- Fixes "unsupported envelope type stream_close" error when frames arrive out of order

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:44:31 +00:00
copilot-swe-agent[bot]
887c5fcdff Address code review feedback: cleanup unused code and improve channel safety
- Remove unused streamResponse struct
- Add named constant for response channel buffer size
- Add default case to prevent blocking in readLoop select
- Fix potential double-close of channels during cleanup
- Improve comments explaining concurrency patterns

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:24:56 +00:00
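The channel-safety fixes map onto two standard Go patterns: close a shared channel through `sync.Once`, and dispatch from the read loop with a `select` that has a `default` case. A sketch (names and the buffer size constant are illustrative, not the actual code):

```go
package tunnel

import "sync"

// respChanBuffer is a named buffer size in the spirit of the commit;
// the exact value here is illustrative.
const respChanBuffer = 16

// responseChan wraps a channel with sync.Once so racing cleanup paths
// cannot trigger a double-close panic.
type responseChan struct {
	ch        chan []byte
	closeOnce sync.Once
}

func newResponseChan() *responseChan {
	return &responseChan{ch: make(chan []byte, respChanBuffer)}
}

func (r *responseChan) Close() {
	r.closeOnce.Do(func() { close(r.ch) })
}

// TryDispatch mirrors the readLoop's select with a default case: a slow
// or departed consumer must never block the shared read loop. Callers
// must stop dispatching once Close has been called.
func (r *responseChan) TryDispatch(msg []byte) bool {
	select {
	case r.ch <- msg:
		return true
	default:
		return false // drop (or buffer elsewhere) instead of blocking
	}
}
```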
copilot-swe-agent[bot]
ff38ef2828 Fix concurrent request handling with stream multiplexing
- Add channel-based multiplexing to handle concurrent HTTP requests
- Implement background readLoop to dispatch responses to correct streams
- Remove mutex bottleneck that was serializing all requests
- Fixes "unexpected stream_data/stream_open for id" errors with concurrent requests

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:22:29 +00:00
copilot-swe-agent[bot]
1292df33e5 Fix DTLS buffer size issue by wrapping sessions with buffered readers
- Add dtlsReadBufferSize constant (8KB) matching pion/dtls limits
- Wrap DTLS sessions with bufio.Reader in client and server code
- Update tests to use buffered readers for datagram-based connections
- All tests passing successfully

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:07:15 +00:00
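The buffered-reader wrap looks roughly like this; the 8 KiB figure comes from the commit message, while the type and constructor names are assumptions:

```go
package tunnel

import (
	"bufio"
	"io"
	"net"
)

// dtlsReadBufferSize matches the 8 KiB limit the commit cites for
// pion/dtls datagrams.
const dtlsReadBufferSize = 8192

// bufferedSession pairs a DTLS net.Conn with a reader large enough to
// hold a full datagram, so length-prefixed decoding never fails with
// "buffer is too small" mid-frame.
type bufferedSession struct {
	net.Conn
	r *bufio.Reader
}

func newBufferedSession(c net.Conn) *bufferedSession {
	return &bufferedSession{Conn: c, r: bufio.NewReaderSize(c, dtlsReadBufferSize)}
}

// Read satisfies io.Reader via the buffered reader rather than the raw conn.
func (s *bufferedSession) Read(p []byte) (int, error) { return s.r.Read(p) }

var _ io.ReadWriteCloser = (*bufferedSession)(nil)
```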
copilot-swe-agent[bot]
412b59f420 Initial plan 2025-12-09 13:59:02 +00:00
dalbodeule
1847a264cb [fix](protocol): improve Protobuf decoding with precise payload reading and clarification
- Refactored `Decode` to use `io.ReadFull` for accurate length-prefix and payload reading.
- Simplified logic to avoid mismatched length issues and clarified comments for maintainability.
2025-12-09 20:11:21 +09:00
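Length-prefix decoding with `io.ReadFull` follows a standard shape. This sketch assumes a 4-byte big-endian prefix, which may differ from the project's actual codec:

```go
package tunnel

import (
	"encoding/binary"
	"fmt"
	"io"
)

// decodeFrame reads one length-prefixed payload exactly, in the style
// the commit describes: io.ReadFull for both the prefix and the
// payload, so short reads can never yield a truncated protobuf.
func decodeFrame(r io.Reader, maxLen uint32) ([]byte, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, fmt.Errorf("read length prefix: %w", err)
	}
	n := binary.BigEndian.Uint32(lenBuf[:])
	if n > maxLen {
		return nil, fmt.Errorf("frame of %d bytes exceeds limit %d", n, maxLen)
	}
	payload := make([]byte, n)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, fmt.Errorf("read payload: %w", err)
	}
	return payload, nil
}
```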
JinU Choi
d4d6615c0e Merge pull request #18 from dalbodeule/copilot/fix-protobuf-length-prefix-framing
Fix DTLS protobuf codec for UDP datagram boundaries
2025-12-09 20:03:32 +09:00
copilot-swe-agent[bot]
a00c001b49 Improve test documentation for mock datagram connection
Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 10:51:44 +00:00
copilot-swe-agent[bot]
76423627e9 Fix DTLS protobuf codec framing for datagram boundaries
- Modified protobufCodec.Encode() to combine length prefix and protobuf data into a single buffer and write in one call
- Modified protobufCodec.Decode() to read entire datagram in a single Read call
- Added comprehensive tests for datagram-based codec behavior
- Fixes issue #17: proto: cannot parse invalid wire-format data error in DTLS

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 10:49:37 +00:00
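The single-write encode side pairs with the decoder sketched earlier; again, a 4-byte big-endian prefix is assumed rather than taken from the actual codec:

```go
package tunnel

import (
	"encoding/binary"
	"io"
)

// encodeFrame writes the length prefix and payload with one Write call.
// On a datagram transport like DTLS, two separate writes become two
// datagrams, and the receiver's single Read then splits the frame,
// which is exactly the wire-format error the commit fixes.
func encodeFrame(w io.Writer, payload []byte) error {
	buf := make([]byte, 4+len(payload))
	binary.BigEndian.PutUint32(buf[:4], uint32(len(payload)))
	copy(buf[4:], payload)
	_, err := w.Write(buf) // one write == one datagram
	return err
}
```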
copilot-swe-agent[bot]
9a70256d89 Initial plan 2025-12-09 10:44:22 +00:00
dalbodeule
852a22b8d8 [refactor](build): migrate build_server_image.sh to POSIX sh and improve build options
- Rewrote the script for POSIX compliance (`bash` to `sh`).
- Enhanced environment variable handling for optional arguments (`PLATFORM`, `PUSH`).
- Improved readability and added detailed inline comments for maintainability.
2025-12-09 18:45:22 +09:00
dalbodeule
c295d8c20d Make build_server_image.sh executable (add +x) 2025-12-09 18:41:45 +09:00
dalbodeule
1336c540d0 [feat](build): add versioned Docker image build script and version injection
- Introduced `tools/build_server_image.sh` for building versioned server images with support for multi-arch builds.
- Added `VERSION` injection via `-ldflags` in Dockerfile and Go binaries for both server and client.
- Updated workflows and Makefile to ensure consistent version tagging during builds.
2025-12-09 18:41:00 +09:00
dalbodeule
3402616c3e [feat](protocol): regenerate Protobuf Go types from updated hopgate_stream.proto
- Generated `hopgate_stream.pb.go` based on the latest schema for DTLS stream tunneling.
- Added new Protobuf message types, including `Request`, `Response`, `StreamOpen`, `StreamData`, `StreamAck`, `StreamClose`, and `Envelope`.
2025-12-09 18:14:33 +09:00
dalbodeule
715cf6b636 [fix](protocol): improve Protobuf codec buffering for DTLS compatibility
- Updated `Decode` to wrap `io.Reader` in a sufficiently large `bufio.Reader` when handling DTLS sessions, preventing "buffer is too small" errors.
- Enhanced length-prefix reading logic to ensure safe handling of Protobuf envelopes during DTLS stream processing.
- Clarified comments and fixed minor formatting inconsistencies in Protobuf codec documentation.
2025-12-09 17:23:02 +09:00
28 changed files with 3257 additions and 1092 deletions


@@ -56,4 +56,6 @@ jobs:
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
VERSION=${{ github.sha }}


@@ -7,16 +7,21 @@ This document describes the overall architecture of the HopGate system. (en)
## 1. Overview / 전체 개요
- HopGate는 공인 서버와 여러 프라이빗 네트워크 클라이언트 사이에서 HTTP(S) 트래픽을 터널링하는 게이트웨이입니다. (ko)
- HopGate is a gateway that tunnels HTTP(S) traffic between a public server and multiple private-network clients. (en)
- 서버는 80/443 포트를 점유하고, ACME(Let's Encrypt 등)로 TLS 인증서를 자동 발급/갱신합니다. (ko)
- The server listens on ports 80/443 and automatically issues/renews TLS certificates using ACME (e.g. Let's Encrypt). (en)
- 클라이언트는 DTLS를 통해 서버에 연결되고, 서버가 전달한 HTTP 요청을 로컬 서비스(127.0.0.1:PORT)에 대신 보내고 응답을 다시 서버로 전달합니다. (ko)
- Clients connect to the server via DTLS, forward HTTP requests to local services (127.0.0.1:PORT), and send the responses back to the server. (en)
- 전송 계층은 **TCP + TLS(HTTPS) + HTTP/2 + gRPC** 기반의 터널을 사용해 서버–클라이언트 간 HTTP 요청/응답을 멀티플렉싱합니다. (ko)
- The transport layer uses a **TCP + TLS (HTTPS) + HTTP/2 + gRPC**-based tunnel to multiplex HTTP requests/responses between server and clients. (en)
- 클라이언트는 장기 유지 gRPC bi-directional stream 을 통해 서버와 터널을 형성하고,
서버가 전달한 HTTP 요청을 로컬 서비스(127.0.0.1:PORT)에 대신 보내고 응답을 다시 서버로 전달합니다. (ko)
- Clients establish long-lived gRPC bi-directional streams as tunnels to the server,
forward HTTP requests to local services (127.0.0.1:PORT), and send responses back to the server. (en)
- 관리 Plane(REST API)을 통해 도메인 등록/해제 및 클라이언트 API Key 발급을 수행합니다. (ko)
- An admin plane (REST API) is used to register/unregister domains and issue client API keys. (en)
---
@@ -31,8 +36,7 @@ This document describes the overall architecture of the HopGate system. (en)
├── internal/
│ ├── config/ # shared configuration loader
│ ├── acme/ # ACME certificate management
│ ├── dtls/ # DTLS abstraction & implementation
│ ├── proxy/ # HTTP proxy / tunneling core
│ ├── proxy/ # HTTP proxy / tunneling core (gRPC tunnel)
│ ├── protocol/ # server-client message protocol
│ ├── admin/ # admin plane HTTP handlers
│ └── logging/ # structured logging utilities
@@ -46,11 +50,11 @@ This document describes the overall architecture of the HopGate system. (en)
### 2.1 `cmd/`
- [`cmd/server/main.go`](cmd/server/main.go) — 서버 실행 엔트리 포인트. 서버 설정 로딩, ACME/TLS 초기화, HTTP/HTTPS/DTLS 리스너 시작을 담당합니다. (ko)
- [`cmd/server/main.go`](cmd/server/main.go) — Server entrypoint. Loads configuration, initializes ACME/TLS, and starts HTTP/HTTPS/DTLS listeners. (en)
- [`cmd/server/main.go`](cmd/server/main.go) — 서버 실행 엔트리 포인트. 서버 설정 로딩, ACME/TLS 초기화, HTTP/HTTPS 리스너 및 gRPC 터널 엔드포인트 시작을 담당합니다. (ko)
- [`cmd/server/main.go`](cmd/server/main.go) — Server entrypoint. Loads configuration, initializes ACME/TLS, and starts HTTP/HTTPS listeners plus the gRPC tunnel endpoint. (en)
- [`cmd/client/main.go`](cmd/client/main.go) — 클라이언트 실행 엔트리 포인트. 설정 로딩, DTLS 연결 및 핸드셰이크, 로컬 서비스 프록시 루프를 담당합니다. (ko)
- [`cmd/client/main.go`](cmd/client/main.go) — Client entrypoint. Loads configuration, performs DTLS connection and handshake, and runs the local proxy loop. (en)
- [`cmd/client/main.go`](cmd/client/main.go) — 클라이언트 실행 엔트리 포인트. 설정 로딩, gRPC/HTTP2 터널 연결, 로컬 서비스 프록시 루프를 담당합니다. (ko)
- [`cmd/client/main.go`](cmd/client/main.go) — Client entrypoint. Loads configuration, establishes a gRPC/HTTP2 tunnel to the server, and runs the local proxy loop. (en)
---
@@ -72,37 +76,29 @@ This document describes the overall architecture of the HopGate system. (en)
- ACME(예: Let's Encrypt) 클라이언트 래퍼 및 인증서 매니저를 구현하는 패키지입니다. (ko)
- Package that will wrap an ACME client (e.g. Let's Encrypt) and manage certificates. (en)
- 역할 / Responsibilities: (ko/en)
- 메인 도메인 및 프록시 서브도메인용 TLS 인증서 발급/갱신. (ko)
- Issue/renew TLS certificates for main and proxy domains. (en)
- HTTP-01 / TLS-ALPN-01 챌린지 처리 훅 제공. (ko)
- Provide hooks for HTTP-01 / TLS-ALPN-01 challenges. (en)
- HTTPS/DTLS 리스너에 사용할 `*tls.Config` 제공. (ko)
- Provide `*tls.Config` for HTTPS/DTLS listeners. (en)
- HTTPS 및 gRPC 터널 리스너에 사용할 `*tls.Config` 제공. (ko)
- Provide `*tls.Config` for HTTPS and gRPC tunnel listeners. (en)
---
### 2.4 `internal/dtls`
### 2.4 (Reserved for legacy DTLS prototype)
- DTLS 통신을 추상화하고, pion/dtls 기반 구현 및 핸드셰이크 로직을 포함합니다. (ko)
- Abstracts DTLS communication and includes a pion/dtls-based implementation plus handshake logic. (en)
- 주요 요소 / Main elements: (ko/en)
- `Session`, `Server`, `Client` 인터페이스 — DTLS 위의 스트림과 서버/클라이언트를 추상화. (ko)
- `Session`, `Server`, `Client` interfaces — abstract streams and server/client roles over DTLS. (en)
- `NewPionServer`, `NewPionClient` — pion/dtls 를 사용하는 실제 구현. (ko)
- `NewPionServer`, `NewPionClient` — concrete implementations using pion/dtls. (en)
- `PerformServerHandshake`, `PerformClientHandshake` — 도메인 + 클라이언트 API Key 기반 애플리케이션 레벨 핸드셰이크. (ko)
- `PerformServerHandshake`, `PerformClientHandshake` — application-level handshake based on domain + client API key. (en)
- `NewSelfSignedLocalhostConfig` — 디버그용 localhost self-signed TLS 설정을 생성. (ko)
- `NewSelfSignedLocalhostConfig` — generates a debug-only localhost self-signed TLS config. (en)
> 초기 버전에서 DTLS 기반 터널을 실험했으나, 현재 설계에서는 **gRPC/HTTP2 터널만** 사용합니다.
> DTLS 관련 코드는 점진적으로 제거하거나, 별도 브랜치/히스토리에서만 보존할 예정입니다. (ko)
> Early iterations experimented with a DTLS-based tunnel, but the current design uses **gRPC/HTTP2 tunnels only**.
> Any DTLS-related code is planned to be removed or kept only in historical branches. (en)
---
### 2.5 `internal/protocol`
- 서버와 클라이언트가 DTLS 위에서 주고받는 HTTP 요청/응답 메시지 포맷을 정의합니다. (ko)
- Defines HTTP request/response message formats exchanged over DTLS between server and clients. (en)
- 서버와 클라이언트가 **gRPC/HTTP2 터널 전송 계층** 위에서 주고받는 HTTP 요청/응답 및 스트림 메시지 포맷을 정의합니다. (ko)
- Defines HTTP request/response and stream message formats exchanged over the gRPC/HTTP2 tunnel transport layer. (en)
- 요청 메시지 / Request message: (ko/en)
- `RequestID`, `ClientID`, `ServiceName`, `Method`, `URL`, `Header`, `Body`. (ko/en)
@@ -110,13 +106,15 @@ This document describes the overall architecture of the HopGate system. (en)
- 응답 메시지 / Response message: (ko/en)
- `RequestID`, `Status`, `Header`, `Body`, `Error`. (ko/en)
- 인코딩은 현재 JSON 을 사용하며, 각 HTTP 요청/응답을 하나의 Envelope 로 감싸 DTLS 위에서 전송합니다. (ko)
- Encoding currently uses JSON, wrapping each HTTP request/response in a single Envelope sent over DTLS. (en)
- 스트림 기반 터널링을 위한 Envelope/Stream 타입: (ko/en)
- [`Envelope`](internal/protocol/protocol.go:64) — 상위 메시지 컨테이너. (ko/en)
- [`StreamOpen`](internal/protocol/protocol.go:94) — 새로운 스트림 오픈 및 헤더/메타데이터 전달. (ko/en)
- [`StreamData`](internal/protocol/protocol.go:104) — 시퀀스 번호(Seq)를 가진 바디 chunk 프레임. (ko/en)
- [`StreamClose`](internal/protocol/protocol.go:143) — 스트림 종료 및 에러 정보 전달. (ko/en)
- [`StreamAck`](internal/protocol/protocol.go:117) — 선택적 재전송(Selective Retransmission)을 위한 ACK/NACK 힌트. (ko/en)
- 향후에는 `Envelope.StreamOpen` / `StreamData` / `StreamClose` 필드를 활용한 **스트림/프레임 기반 프로토콜**로 전환하여,
대용량 HTTP 바디도 DTLS/UDP MTU 한계를 넘지 않도록 chunk 단위로 안전하게 전송할 계획입니다. (ko)
- In the future, the plan is to move to a **stream/frame-based protocol** using `Envelope.StreamOpen` / `StreamData` / `StreamClose`,
so that large HTTP bodies can be safely chunked under DTLS/UDP MTU limits. (en)
- 이 구조는 Protobuf 기반 length-prefix 프레이밍을 사용하며, gRPC bi-di stream 의 메시지 타입으로 매핑됩니다. (ko)
- This structure uses protobuf-based length-prefixed framing and is mapped onto messages in a gRPC bi-di stream. (en)
---
@@ -127,28 +125,27 @@ This document describes the overall architecture of the HopGate system. (en)
#### 서버 측 역할 / Server-side role
- 공인 HTTPS 엔드포인트에서 들어오는 요청을 수신합니다. (ko)
- Receive incoming requests on the public HTTPS endpoint. (en)
- 도메인/패스 규칙에 따라 적절한 클라이언트와 서비스로 매핑합니다. (ko)
- Map requests to appropriate clients and services based on domain/path rules. (en)
- 요청을 `protocol.Request` 로 직렬화하여 DTLS 세션을 통해 클라이언트로 전송합니다. (ko)
- Serialize the request as `protocol.Request` and send it over a DTLS session to the client. (en)
- 클라이언트로부터 받은 `protocol.Response` 를 HTTP 응답으로 복원하여 외부 사용자에게 반환합니다. (ko)
- Deserialize `protocol.Response` from the client and return it as an HTTP response to the external user. (en)
- 요청/응답을 `internal/protocol` 의 스트림 메시지(`StreamOpen` / `StreamData` / `StreamClose` 등)로 직렬화하여
서버–클라이언트 간 gRPC bi-di stream 위에서 주고받습니다. (ko)
- Serialize requests/responses into stream messages from `internal/protocol` (`StreamOpen` / `StreamData` / `StreamClose`, etc.)
and exchange them between server and clients over a gRPC bi-di stream. (en)
#### 클라이언트 측 역할 / Client-side role
- DTLS 채널을 통해 서버가 내려보낸 `protocol.Request` 를 수신합니다. (ko)
- Receive `protocol.Request` objects sent by the server over DTLS. (en)
- 서버가 gRPC 터널을 통해 내려보낸 스트림 메시지를 수신합니다. (ko)
- Receive stream messages sent by the server over the gRPC tunnel. (en)
- 로컬 HTTP 서비스(예: 127.0.0.1:8080)에 요청을 전달하고 응답을 수신합니다. (ko)
- Forward these requests to local HTTP services (e.g. 127.0.0.1:8080) and collect responses. (en)
- 응답을 `protocol.Response` 로 직렬화하여 DTLS 채널을 통해 서버로 전송합니다. (ko)
- Serialize responses as `protocol.Response` and send them back to the server over DTLS. (en)
- 응답을 동일한 gRPC bi-di stream 상의 역방향 스트림 메시지로 직렬화하여 서버로 전송합니다. (ko)
- Serialize responses as reverse-direction stream messages on the same gRPC bi-di stream and send them back to the server. (en)
---
@@ -192,20 +189,24 @@ An external user sends an HTTPS request to `https://proxy.example.com/service-a/
2. HopGate 서버의 HTTPS 리스너가 요청을 수신합니다. (ko)
The HTTPS listener on the HopGate server receives the request. (en)
3. `proxy` 레이어가 도메인과 경로를 기반으로 이 요청을 처리할 클라이언트(예: client-1)와 해당 로컬 서비스(`service-a`)를 결정합니다. (ko)
The `proxy` layer decides which client (e.g., client-1) and which local service (`service-a`) should handle the request, based on domain and path. (en)
4. 서버는 요청을 `protocol.Request` 구조로 직렬화하고, `dtls.Session` 을 통해 선택된 클라이언트로 전송합니다. (ko)
The server serializes the request into a `protocol.Request` and sends it to the selected client over a `dtls.Session`. (en)
4. 서버는 요청을 `internal/protocol` 의 스트림 메시지(예: `StreamOpen` + 여러 `StreamData` + `StreamClose`)로 직렬화하고,
선택된 클라이언트와 맺은 gRPC bi-di stream 을 통해 전송합니다. (ko)
The server serializes the request into stream messages from `internal/protocol` (e.g., `StreamOpen` + multiple `StreamData` + `StreamClose`)
and sends them over a gRPC bi-di stream to the selected client. (en)
5. 클라이언트의 `proxy` 레이어는 `protocol.Request` 를 수신하고, 로컬 서비스(예: 127.0.0.1:8080)에 HTTP 요청을 수행합니다. (ko)
The client's `proxy` layer receives the `protocol.Request` and performs an HTTP request to a local service (e.g., 127.0.0.1:8080). (en)
5. 클라이언트의 `proxy` 레이어는 이 스트림 메시지들을 수신해 로컬 서비스(예: 127.0.0.1:8080)에 HTTP 요청을 수행합니다. (ko)
The client's `proxy` layer receives these stream messages and performs an HTTP request to a local service (e.g., 127.0.0.1:8080). (en)
6. 클라이언트는 로컬 서비스로부터 HTTP 응답을 수신하고, 이를 `protocol.Response` 로 직렬화하여 DTLS 채널을 통해 서버로 다시 전송합니다. (ko)
The client receives the HTTP response from the local service, serializes it as a `protocol.Response`, and sends it back to the server over DTLS. (en)
6. 클라이언트는 로컬 서비스로부터 HTTP 응답을 수신하고, 이를 역방향 스트림 메시지(`StreamOpen` + 여러 `StreamData` + `StreamClose`)로 직렬화하여
동일한 gRPC bi-di stream 을 통해 서버로 다시 전송합니다. (ko)
The client receives the HTTP response from the local service, serializes it as reverse-direction stream messages
(`StreamOpen` + multiple `StreamData` + `StreamClose`), and sends them back to the server over the same gRPC bi-di stream. (en)
7. 서버는 `protocol.Response` 를 디코딩하여 원래의 HTTPS 요청에 대한 HTTP 응답으로 변환한 뒤, 외부 사용자에게 반환합니다. (ko)
The server decodes the `protocol.Response`, converts it back into an HTTP response, and returns it to the original external user. (en)
7. 서버는 응답 스트림 메시지를 조립해 원래의 HTTPS 요청에 대한 HTTP 응답으로 변환한 뒤, 외부 사용자에게 반환합니다. (ko)
The server reassembles the response stream messages into an HTTP response for the original HTTPS request and returns it to the external user. (en)
![architecture.jpeg](images/architecture.jpeg)
@@ -213,25 +214,26 @@ The server decodes the `protocol.Response`, converts it back into an HTTP respon
## 4. Next Steps / 다음 단계
- 위 아키텍처를 기반으로 디렉터리와 엔트리 포인트를 생성/정리합니다. (ko)
- Use this architecture to create/organize directories and entrypoints. (en)
- `internal/config` 에 필요한 설정 필드와 `.env` 로더를 확장합니다. (ko)
- Extend `internal/config` with required config fields and `.env` loaders. (en)
- `internal/acme` 에 ACME 클라이언트(certmagic 또는 lego 등)를 연결해 TLS 인증서 발급/갱신을 구현합니다. (ko)
- Wire an ACME client (certmagic, lego, etc.) into `internal/acme` to implement TLS certificate issuance/renewal. (en)
- `internal/dtls` 에서 pion/dtls 기반 DTLS 전송 계층 및 핸드셰이크를 안정화합니다. (ko)
- Stabilize the pion/dtls-based DTLS transport and handshake logic in `internal/dtls`. (en)
- gRPC/HTTP2 기반 터널 전송 계층을 설계/구현하고, 서버/클라이언트 모두에서 장기 유지 bi-di stream 위에
HTTP 요청/응답을 멀티플렉싱하는 로직을 추가합니다. (ko)
- Design and implement a gRPC/HTTP2-based tunnel transport layer, adding logic on both server and client to multiplex HTTP requests/responses over long-lived bi-di streams. (en)
- `internal/protocol` 과 `internal/proxy` 를 통해 실제 HTTP 터널링을 구현하고,
단일 JSON Envelope 기반 모델에서 `StreamOpen` / `StreamData` / `StreamClose` 중심의 스트림 기반 DTLS 터널링으로 전환합니다. (ko)
gRPC 기반 스트림 모델이 재사용할 수 있는 논리 프로토콜로 정리합니다. (ko)
- Implement real HTTP tunneling and routing rules via `internal/protocol` and `internal/proxy`,
and move from a single JSON-Envelope model to a stream-based DTLS tunneling model built around `StreamOpen` / `StreamData` / `StreamClose`. (en)
organizing the logical protocol so that the gRPC-based stream model can reuse it. (en)
- `internal/admin` + `ent` + PostgreSQL 을 사용해 Domain 등록/해제 및 클라이언트 API Key 발급을 완성합니다. (ko)
- Complete domain registration/unregistration and client API key issuing using `internal/admin` + `ent` + PostgreSQL. (en)
- 로깅/메트릭을 Prometheus + Loki + Grafana 스택과 연동하여 운영 가시성을 확보합니다. (ko)
- Integrate logging/metrics with the Prometheus + Loki + Grafana stack to gain operational visibility. (en)


@@ -18,6 +18,8 @@ FROM golang:1.25-alpine AS builder
# With defaults set, a local docker build works without extra arguments.
ARG TARGETOS=linux
ARG TARGETARCH=amd64
# VERSION build arg used to inject git tag/commit info into main.version (default: dev)
ARG VERSION=dev
WORKDIR /src
@@ -32,7 +34,8 @@ RUN go mod download
COPY . .
# Build the server binary (multi-arch via TARGETOS/TARGETARCH)
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/hop-gate-server ./cmd/server
# Inject the VERSION value into main.version via -ldflags.
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags "-X main.version=${VERSION}" -o /out/hop-gate-server ./cmd/server
# ---------- Runtime stage ----------
FROM alpine:3.20
@@ -49,7 +52,7 @@ COPY --from=builder /out/hop-gate-server /app/hop-gate-server
COPY .env.example /app/.env.example
# Expose default ports (actual ports are configurable via .env / settings)
EXPOSE 80 443/udp 443
EXPOSE 80 443
# Default run command
ENTRYPOINT ["/app/hop-gate-server"]


@@ -18,7 +18,9 @@ BIN_DIR := ./bin
SERVER_BIN := $(BIN_DIR)/hop-gate-server
CLIENT_BIN := $(BIN_DIR)/hop-gate-client
VERSION ?= $(shell git describe --tags --dirty --always 2>/dev/null || echo dev)
# VERSION uses the 7-character SHA of the current commit (e.g. 1a2b3c4).
# Falls back to dev when git info is unavailable.
VERSION ?= $(shell git rev-parse --short=7 HEAD 2>/dev/null || echo dev)
# Load the .env file
include .env


@@ -1,20 +1,35 @@
package main
import (
"bytes"
"context"
"crypto/tls"
"crypto/x509"
"flag"
"fmt"
"io"
"net"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"sync"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"github.com/dalbodeule/hop-gate/internal/config"
"github.com/dalbodeule/hop-gate/internal/dtls"
"github.com/dalbodeule/hop-gate/internal/logging"
"github.com/dalbodeule/hop-gate/internal/proxy"
"github.com/dalbodeule/hop-gate/internal/protocol"
protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
)
// version is overwritten at build time via -ldflags "-X main.version=xxxxxxx".
// The default "dev" is for local development.
var version = "dev"
func getEnvOrPanic(logger logging.Logger, key string) string {
value, exists := os.LookupEnv(key)
if !exists || strings.TrimSpace(value) == "" {
@@ -44,6 +59,639 @@ func firstNonEmpty(values ...string) string {
return ""
}
// runGRPCTunnelClient 는 gRPC 기반 터널을 사용하는 실험적 클라이언트 진입점입니다. (ko)
// runGRPCTunnelClient is an experimental entrypoint for a gRPC-based tunnel client. (en)
func runGRPCTunnelClient(ctx context.Context, logger logging.Logger, finalCfg *config.ClientConfig) error {
// TLS 설정은 기존 DTLS 클라이언트와 동일한 정책을 사용합니다. (ko)
// TLS configuration mirrors the existing DTLS client policy. (en)
var tlsCfg *tls.Config
if finalCfg.Debug {
tlsCfg = &tls.Config{
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
}
} else {
rootCAs, err := x509.SystemCertPool()
if err != nil || rootCAs == nil {
rootCAs = x509.NewCertPool()
}
tlsCfg = &tls.Config{
RootCAs: rootCAs,
MinVersion: tls.VersionTLS12,
}
}
// finalCfg.ServerAddr is "host:port", so only the DNS host part must go into SNI.
host := finalCfg.ServerAddr
if h, _, err := net.SplitHostPort(finalCfg.ServerAddr); err == nil && strings.TrimSpace(h) != "" {
host = h
}
tlsCfg.ServerName = host
creds := credentials.NewTLS(tlsCfg)
log := logger.With(logging.Fields{
"component": "grpc_tunnel_client",
"server_addr": finalCfg.ServerAddr,
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
})
log.Info("dialing grpc tunnel", nil)
conn, err := grpc.DialContext(ctx, finalCfg.ServerAddr, grpc.WithTransportCredentials(creds), grpc.WithBlock())
if err != nil {
log.Error("failed to dial grpc tunnel server", logging.Fields{
"error": err.Error(),
})
return err
}
defer conn.Close()
client := protocolpb.NewHopGateTunnelClient(conn)
stream, err := client.OpenTunnel(ctx)
if err != nil {
log.Error("failed to open grpc tunnel stream", logging.Fields{
"error": err.Error(),
})
return err
}
log.Info("grpc tunnel stream opened", nil)
// 초기 핸드셰이크: 도메인, API 키, 로컬 타깃 정보를 StreamOpen 헤더로 전송합니다. (ko)
// Initial handshake: send domain, API key, and local target via StreamOpen headers. (en)
headers := map[string]*protocolpb.HeaderValues{
"X-HopGate-Domain": {Values: []string{finalCfg.Domain}},
"X-HopGate-API-Key": {Values: []string{finalCfg.ClientAPIKey}},
"X-HopGate-Local-Target": {Values: []string{finalCfg.LocalTarget}},
}
open := &protocolpb.StreamOpen{
Id: "control-0",
ServiceName: "control",
TargetAddr: "",
Header: headers,
}
env := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: open,
},
}
if err := stream.Send(env); err != nil {
log.Error("failed to send initial stream_open handshake", logging.Fields{
"error": err.Error(),
})
return err
}
log.Info("sent initial stream_open handshake on grpc tunnel", logging.Fields{
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
"api_key_mask": maskAPIKey(finalCfg.ClientAPIKey),
})
// 로컬 HTTP 프록시용 HTTP 클라이언트 구성. (ko)
// HTTP client used to forward requests to the local target. (en)
httpClient := &http.Client{
Timeout: 30 * time.Second,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 10 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
ForceAttemptHTTP2: true,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
}
// 서버→클라이언트 방향 StreamOpen/StreamData/StreamClose 를
// HTTP 요청 단위로 모으기 위한 per-stream 상태 테이블입니다. (ko)
// Per-stream state table to assemble HTTP requests from StreamOpen/Data/Close. (en)
type inboundStream struct {
open *protocolpb.StreamOpen
body bytes.Buffer
}
streams := make(map[string]*inboundStream)
var streamsMu sync.Mutex
// gRPC 스트림에 대한 Send 는 동시 호출이 안전하지 않으므로, sendMu 로 직렬화합니다. (ko)
// gRPC streaming Send is not safe for concurrent calls; protect with a mutex. (en)
var sendMu sync.Mutex
sendEnv := func(e *protocolpb.Envelope) error {
sendMu.Lock()
defer sendMu.Unlock()
return stream.Send(e)
}
// 서버에서 전달된 StreamOpen/StreamData/StreamClose 를 로컬 HTTP 요청으로 변환하고,
// 응답을 StreamOpen/StreamData/StreamClose 로 다시 서버에 전송하는 헬퍼입니다. (ko)
// handleStream forwards a single logical HTTP request to the local target
// and sends the response back as StreamOpen/StreamData/StreamClose frames. (en)
handleStream := func(so *protocolpb.StreamOpen, body []byte) {
go func() {
streamID := strings.TrimSpace(so.Id)
if streamID == "" {
log.Error("inbound stream has empty id", logging.Fields{})
return
}
if finalCfg.LocalTarget == "" {
log.Error("local target is empty; cannot forward request", logging.Fields{
"stream_id": streamID,
})
return
}
// Pseudo-headers 에서 메서드/URL/Host 추출. (ko)
// Extract method/URL/host from pseudo-headers. (en)
method := http.MethodGet
if hv, ok := so.Header[protocol.HeaderKeyMethod]; ok && hv != nil && len(hv.Values) > 0 && strings.TrimSpace(hv.Values[0]) != "" {
method = hv.Values[0]
}
urlStr := "/"
if hv, ok := so.Header[protocol.HeaderKeyURL]; ok && hv != nil && len(hv.Values) > 0 && strings.TrimSpace(hv.Values[0]) != "" {
urlStr = hv.Values[0]
}
u, err := url.Parse(urlStr)
if err != nil {
errMsg := fmt.Sprintf("parse url from stream_open: %v", err)
log.Error("failed to parse url from stream_open", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
log.Error("failed to send error stream_open from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
log.Error("failed to send error stream_data from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
log.Error("failed to send error stream_close from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
}
return
}
u.Scheme = "http"
u.Host = finalCfg.LocalTarget
// 로컬 HTTP 요청용 헤더 구성 (pseudo-headers 제거). (ko)
// Build local HTTP headers, stripping pseudo-headers. (en)
httpHeader := make(http.Header, len(so.Header))
for k, hv := range so.Header {
if k == protocol.HeaderKeyMethod ||
k == protocol.HeaderKeyURL ||
k == protocol.HeaderKeyHost ||
k == protocol.HeaderKeyStatus {
continue
}
if hv == nil {
continue
}
for _, v := range hv.Values {
httpHeader.Add(k, v)
}
}
var reqBody io.Reader
if len(body) > 0 {
reqBody = bytes.NewReader(body)
}
req, err := http.NewRequestWithContext(ctx, method, u.String(), reqBody)
if err != nil {
errMsg := fmt.Sprintf("create http request from stream: %v", err)
log.Error("failed to create local http request", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
log.Error("failed to send error stream_open from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
log.Error("failed to send error stream_data from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
log.Error("failed to send error stream_close from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
}
return
}
req.Header = httpHeader
if len(body) > 0 {
req.ContentLength = int64(len(body))
}
start := time.Now()
logReq := log.With(logging.Fields{
"component": "grpc_client_proxy",
"stream_id": streamID,
"service": so.ServiceName,
"method": method,
"url": urlStr,
"local_target": finalCfg.LocalTarget,
})
logReq.Info("forwarding stream http request to local target", nil)
res, err := httpClient.Do(req)
if err != nil {
errMsg := fmt.Sprintf("perform local http request: %v", err)
logReq.Error("local http request failed", logging.Fields{
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
logReq.Error("failed to send error stream_open from client", logging.Fields{
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
logReq.Error("failed to send error stream_data from client", logging.Fields{
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
logReq.Error("failed to send error stream_close from client", logging.Fields{
"error": err2.Error(),
})
}
return
}
defer res.Body.Close()
// 응답 헤더 맵을 복사하고 상태 코드를 pseudo-header 로 추가합니다. (ko)
// Copy response headers and attach status code as a pseudo-header. (en)
respHeader := make(map[string]*protocolpb.HeaderValues, len(res.Header)+1)
for k, vs := range res.Header {
hv := &protocolpb.HeaderValues{
Values: append([]string(nil), vs...),
}
respHeader[k] = hv
}
statusCode := res.StatusCode
if statusCode == 0 {
statusCode = http.StatusOK
}
respHeader[protocol.HeaderKeyStatus] = &protocolpb.HeaderValues{
Values: []string{strconv.Itoa(statusCode)},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err := sendEnv(respOpen); err != nil {
logReq.Error("failed to send stream response open envelope from client", logging.Fields{
"error": err.Error(),
})
return
}
// 응답 바디를 4KiB(StreamChunkSize) 단위로 잘라 StreamData 프레임으로 전송합니다. (ko)
// Chunk the response body into 4KiB (StreamChunkSize) StreamData frames. (en)
buf := make([]byte, protocol.StreamChunkSize)
var seq uint64
for {
n, err := res.Body.Read(buf)
if n > 0 {
dataCopy := append([]byte(nil), buf[:n]...)
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: seq,
Data: dataCopy,
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
logReq.Error("failed to send stream response data envelope from client", logging.Fields{
"error": err2.Error(),
})
return
}
seq++
}
if err == io.EOF {
break
}
if err != nil {
logReq.Error("failed to read local http response body", logging.Fields{
"error": err.Error(),
})
break
}
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: "",
},
},
}
if err := sendEnv(closeEnv); err != nil {
logReq.Error("failed to send stream response close envelope from client", logging.Fields{
"error": err.Error(),
})
return
}
logReq.Info("stream http response sent from client", logging.Fields{
"status": statusCode,
"elapsed_ms": time.Since(start).Milliseconds(),
"error": "",
})
}()
}
// 수신 루프: 서버에서 들어오는 StreamOpen/StreamData/StreamClose 를
// 로컬 HTTP 요청으로 변환하고 응답을 다시 터널로 전송합니다. (ko)
// Receive loop: convert incoming StreamOpen/StreamData/StreamClose into local
// HTTP requests and send responses back over the tunnel. (en)
for {
if ctx.Err() != nil {
log.Info("context cancelled, closing grpc tunnel client", logging.Fields{
"error": ctx.Err().Error(),
})
return ctx.Err()
}
in, err := stream.Recv()
if err != nil {
if err == io.EOF {
log.Info("grpc tunnel stream closed by server", nil)
return nil
}
log.Error("grpc tunnel receive error", logging.Fields{
"error": err.Error(),
})
return err
}
payloadType := "unknown"
switch payload := in.Payload.(type) {
case *protocolpb.Envelope_HttpRequest:
payloadType = "http_request"
case *protocolpb.Envelope_HttpResponse:
payloadType = "http_response"
case *protocolpb.Envelope_StreamOpen:
payloadType = "stream_open"
so := payload.StreamOpen
if so == nil {
log.Error("received stream_open with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(so.Id)
if streamID == "" {
log.Error("received stream_open with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
if _, exists := streams[streamID]; exists {
log.Error("received duplicate stream_open for existing stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
streamsMu.Unlock()
continue
}
streams[streamID] = &inboundStream{open: so}
streamsMu.Unlock()
case *protocolpb.Envelope_StreamData:
payloadType = "stream_data"
sd := payload.StreamData
if sd == nil {
log.Error("received stream_data with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(sd.Id)
if streamID == "" {
log.Error("received stream_data with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
st := streams[streamID]
streamsMu.Unlock()
if st == nil {
log.Warn("received stream_data for unknown stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
continue
}
if len(sd.Data) > 0 {
if _, err := st.body.Write(sd.Data); err != nil {
log.Error("failed to buffer stream_data body on grpc tunnel client", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
}
}
case *protocolpb.Envelope_StreamClose:
payloadType = "stream_close"
sc := payload.StreamClose
if sc == nil {
log.Error("received stream_close with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(sc.Id)
if streamID == "" {
log.Error("received stream_close with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
st := streams[streamID]
if st != nil {
delete(streams, streamID)
}
streamsMu.Unlock()
if st == nil {
log.Warn("received stream_close for unknown stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
continue
}
// 현재까지 수신한 메타데이터/바디를 사용해 로컬 HTTP 요청을 수행하고,
// 응답을 다시 터널로 전송합니다. (ko)
// Use the accumulated metadata/body to perform the local HTTP request and
// send the response back over the tunnel. (en)
bodyCopy := append([]byte(nil), st.body.Bytes()...)
handleStream(st.open, bodyCopy)
case *protocolpb.Envelope_StreamAck:
payloadType = "stream_ack"
// 현재 gRPC 터널에서는 StreamAck 를 사용하지 않습니다. (ko)
// StreamAck is currently unused for gRPC tunnels. (en)
default:
payloadType = fmt.Sprintf("unknown(%T)", in.Payload)
}
log.Info("received envelope on grpc tunnel client", logging.Fields{
"payload_type": payloadType,
})
}
}
func main() {
logger := logging.NewStdJSONLogger("client")
@@ -83,7 +731,7 @@ func main() {
})
// CLI flag definitions (take precedence over env vars)
serverAddrFlag := flag.String("server-addr", "", "DTLS server address (host:port)")
serverAddrFlag := flag.String("server-addr", "", "HopGate server address (host:port)")
domainFlag := flag.String("domain", "", "registered domain (e.g. api.example.com)")
apiKeyFlag := flag.String("api-key", "", "client API key for the domain (64 chars)")
localTargetFlag := flag.String("local-target", "", "local HTTP target (host:port), e.g. 127.0.0.1:8080")
@@ -124,6 +772,7 @@ func main() {
logger.Info("hop-gate client starting", logging.Fields{
"stack": "prometheus-loki-grafana",
"version": version,
"server_addr": finalCfg.ServerAddr,
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
@@ -131,78 +780,16 @@ func main() {
"debug": finalCfg.Debug,
})
// 4. DTLS client connection and handshake
ctx := context.Background()
// In debug mode, server certificate verification is skipped (InsecureSkipVerify=true)
// so that self-signed test certificates are also trusted.
// In production, keep Debug=false and use a tls.Config with proper RootCAs / ServerName.
var tlsCfg *tls.Config
if finalCfg.Debug {
tlsCfg = &tls.Config{
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
}
} else {
// Production mode: system root CAs + server domain as SNI (ServerName)
rootCAs, err := x509.SystemCertPool()
if err != nil || rootCAs == nil {
rootCAs = x509.NewCertPool()
}
tlsCfg = &tls.Config{
RootCAs: rootCAs,
MinVersion: tls.VersionTLS12,
}
}
// The DTLS server checks that SNI (ServerName) matches HOP_SERVER_DOMAIN (cfg.Domain),
// so the client TLS config must also set the domain.
//
// finalCfg.ServerAddr is "host:port", so only the DNS host part must go into SNI.
host := finalCfg.ServerAddr
if h, _, err := net.SplitHostPort(finalCfg.ServerAddr); err == nil && strings.TrimSpace(h) != "" {
host = h
}
tlsCfg.ServerName = host
client := dtls.NewPionClient(dtls.PionClientConfig{
Addr: finalCfg.ServerAddr,
TLSConfig: tlsCfg,
})
sess, err := client.Connect()
if err != nil {
logger.Error("failed to establish dtls session", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
defer sess.Close()
hsRes, err := dtls.PerformClientHandshake(ctx, sess, logger, finalCfg.Domain, finalCfg.ClientAPIKey, finalCfg.LocalTarget)
if err != nil {
logger.Error("dtls handshake failed", logging.Fields{
// 현재 클라이언트는 DTLS 레이어 없이 gRPC 터널만을 사용합니다. (ko)
// The client now uses only the gRPC tunnel, without any DTLS layer. (en)
if err := runGRPCTunnelClient(ctx, logger, finalCfg); err != nil {
logger.Error("grpc tunnel client exited with error", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
logger.Info("dtls handshake completed", logging.Fields{
"domain": hsRes.Domain,
"local_target": finalCfg.LocalTarget,
})
// 5. Start the client proxy loop that handles server requests over the DTLS session
clientProxy := proxy.NewClientProxy(logger, finalCfg.LocalTarget)
logger.Info("starting client proxy loop", logging.Fields{
"local_target": finalCfg.LocalTarget,
})
if err := clientProxy.StartLoop(ctx, sess); err != nil {
logger.Error("client proxy loop exited with error", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
logger.Info("client proxy loop exited normally", nil)
logger.Info("grpc tunnel client exited normally", nil)
}

File diff suppressed because it is too large


@@ -27,7 +27,6 @@ services:
# External 80/443 → container 8080/8443 mapping (e.g. per .env.example)
- "80:80" # HTTP
- "443:443" # HTTPS (TCP)
- "443:443/udp" # DTLS (UDP)
volumes:
# ACME certificate/account cache directory (persisted on the host)

go.mod

@@ -7,9 +7,10 @@ require (
github.com/go-acme/lego/v4 v4.28.1
github.com/google/uuid v1.6.0
github.com/lib/pq v1.10.9
github.com/pion/dtls/v3 v3.0.7
github.com/prometheus/client_golang v1.19.0
golang.org/x/net v0.47.0
google.golang.org/grpc v1.76.0
google.golang.org/protobuf v1.36.10
)
require (
@@ -19,15 +20,13 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/bmatcuk/doublestar v1.3.4 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/hashicorp/hcl/v2 v2.18.1 // indirect
github.com/miekg/dns v1.1.68 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/pion/logging v0.2.4 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
@@ -40,5 +39,5 @@ require (
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/tools v0.38.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8 // indirect
)

go.sum

@@ -14,26 +14,30 @@ github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQ
github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-acme/lego/v4 v4.28.1 h1:zt301JYF51UIEkpSXsdeGq9hRePeFzQCq070OdAmP0Q=
github.com/go-acme/lego/v4 v4.28.1/go.mod h1:bzjilr03IgbaOwlH396hq5W56Bi0/uoRwW/JM8hP7m4=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/inflect v0.19.0 h1:9jCH9scKIbHeV9m12SmPilScz6krDxKRasNNSNPXu/4=
github.com/go-openapi/inflect v0.19.0/go.mod h1:lHpZVlpIQqLyKwJ4N+YSc9hchQy/i12fJykb83CRBH4=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/hcl/v2 v2.18.1 h1:6nxnOJFku1EuSawSD81fuviYUV8DxFr3fp2dUi3ZYSo=
github.com/hashicorp/hcl/v2 v2.18.1/go.mod h1:ThLC89FV4p9MPW804KVbe/cEXoQ8NZEh+JtMeeGErHE=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -48,12 +52,6 @@ github.com/miekg/dns v1.1.68 h1:jsSRkNozw7G/mnmXULynzMNIsgY2dHC8LO6U6Ij2JEA=
github.com/miekg/dns v1.1.68/go.mod h1:fujopn7TB3Pu3JM69XaawiU0wqjpL9/8xGop5UrTPps=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/pion/dtls/v3 v3.0.7 h1:bItXtTYYhZwkPFk4t1n3Kkf5TDrfj6+4wG+CZR8uI9Q=
github.com/pion/dtls/v3 v3.0.7/go.mod h1:uDlH5VPrgOQIw59irKYkMudSFprY9IEFCqz/eTz16f8=
github.com/pion/logging v0.2.4 h1:tTew+7cmQ+Mc1pTBLKH2puKsOvhm32dROumOZ655zB8=
github.com/pion/logging v0.2.4/go.mod h1:DffhXTKYdNZU+KtJ5pyQDjvOAh/GsNSyv1lbkFbe3so=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
@@ -74,6 +72,18 @@ github.com/zclconf/go-cty v1.14.4 h1:uXXczd9QDGsgu0i/QFR/hzI5NYCHLf6NQw/atrbnhq8
github.com/zclconf/go-cty v1.14.4/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty-yaml v1.1.0 h1:nP+jp0qPHv2IhUVqmQSzjvqAWcObN0KBkUl2rWBdig0=
github.com/zclconf/go-cty-yaml v1.1.0/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
@@ -88,6 +98,12 @@ golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8 h1:M1rk8KBnUsBDg1oPGHNCxG4vc1f49epmTO7xscSajMk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A=
google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

Binary file not shown (image: 817 KiB before → 2.6 MiB after)


@@ -4,7 +4,7 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
=== High-level concept ===
- HopGate is a reverse HTTP gateway.
- A single public server terminates HTTPS and DTLS, and tunnels HTTP traffic to multiple clients.
- A single public server terminates HTTPS and exposes a tunnel endpoint (gRPC/HTTP2) to tunnel HTTP traffic to multiple clients.
- Each client runs in a private network and forwards HTTP requests to local services (127.0.0.1:PORT).
=== Main components to draw ===
@@ -19,8 +19,8 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Terminates TLS using ACME certificates for main and proxy domains.
b. "HTTP Listener (TCP 80)"
- Handles ACME HTTP-01 challenges and redirects HTTP to HTTPS.
c. "DTLS Listener (UDP 443 or 8443)"
- Terminates DTLS sessions from multiple clients.
c. "Tunnel Endpoint (gRPC)"
- gRPC/HTTP2 listener on the same HTTPS port (TCP 443) for tunnel streams.
d. "Admin API / Management Plane"
- REST API base path: /api/v1/admin
- Endpoints:
@@ -31,8 +31,9 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Routes incoming HTTP(S) requests to the correct client based on domain and path.
f. "ACME Certificate Manager"
- Automatically issues and renews TLS certificates (Let's Encrypt).
g. "DTLS Session Manager"
- Manages DTLS connections and per-domain sessions with clients.
g. "Tunnel Session Manager"
- Manages tunnel connections and per-domain sessions with clients (gRPC streams).
h. "Metrics & Logging"
- Structured JSON logs shipped to Prometheus / Loki / Grafana stack.
@@ -50,35 +51,34 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Draw 2-3 separate client boxes to show that multiple clients can connect.
- Each box titled "HopGate Client".
- Inside each client box, show:
a. "DTLS Client"
- Connects to HopGate Server via DTLS.
- Performs handshake with:
- domain
- client_api_key
a. "Tunnel Client"
- gRPC client that opens a long-lived bi-directional gRPC stream over HTTPS (HTTP/2).
b. "Client Proxy"
- Receives HTTP requests from the server over DTLS.
- Receives HTTP request frames from the server over the tunnel (gRPC stream).
- Forwards them to local services such as:
- 127.0.0.1:8080 (web)
- 127.0.0.1:9000 (admin)
c. "Local Services"
- A small group of boxes representing local HTTP servers.
=== Flows to highlight ===
1) User HTTP Flow
- External user -> HTTPS Listener -> Reverse Proxy Core -> DTLS Session Manager -> Specific HopGate Client -> Local Service -> back through same path to the user.
- External user -> HTTPS Listener -> Reverse Proxy Core ->
gRPC Tunnel Endpoint -> specific HopGate Client (gRPC stream) -> Local Service ->
back through same path to the user.
2) Admin Flow
- Administrator -> Admin API (with Bearer admin key) -> PostgreSQL + ent ORM:
- Register domain + memo -> returns client_api_key.
- Unregister domain + client_api_key.
3) DTLS Handshake Flow
- From client to server over DTLS:
- Client sends {domain, client_api_key}.
- Server validates against PostgreSQL Domain table.
- On success, both sides log:
- server: which domain is bound to the session.
- client: success message, bound domain, and local_target (local service address).
3) Tunnel Handshake / Session Establishment Flow
- v1: DTLS Handshake Flow (legacy) - (REMOVED)
- v2: gRPC Tunnel Establishment Flow:
- From client to server over HTTPS (HTTP/2):
- Client opens a long-lived bi-directional gRPC stream (e.g. OpenTunnel).
- First frame includes {domain, client_api_key} and client metadata.
- Server validates against PostgreSQL Domain table and associates the gRPC stream with that domain.
- Subsequent frames carry HTTP request/response metadata and body chunks.
=== Visual style ===
- Clean flat design, no 3D.
@@ -93,4 +93,4 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Database near the server,
- Multiple clients and their local services on the right or bottom.
Please output a single high-resolution architecture diagram that matches this description.


@@ -1,218 +1,58 @@
package dtls
import (
"context"
"crypto/tls"
"fmt"
"net"
"time"
piondtls "github.com/pion/dtls/v3"
)
// pionSession wraps pion/dtls.Conn and implements the Session interface.
type pionSession struct {
conn *piondtls.Conn
id string
}
func (s *pionSession) Read(b []byte) (int, error) { return s.conn.Read(b) }
func (s *pionSession) Write(b []byte) (int, error) { return s.conn.Write(b) }
func (s *pionSession) Close() error { return s.conn.Close() }
func (s *pionSession) ID() string { return s.id }
// pionServer is the pion/dtls-based Server implementation.
type pionServer struct {
listener net.Listener
}
// PionServerConfig defines the DTLS server listener configuration.
// PionServerConfig keeps the old DTLS server listener configuration shape for compatibility.
type PionServerConfig struct {
// Addr is the UDP listen address, e.g. "0.0.0.0:443".
Addr string
// TLSConfig is the tls.Config prepared via ACME or similar.
// Settings such as Certificates, RootCAs, and ClientAuth are passed in here.
// If nil, a default empty tls.Config is used.
Addr string
TLSConfig *tls.Config
}
// NewPionServer creates a pion/dtls-based DTLS server.
// It opens a UDP listener internally and prepares to perform DTLS handshakes.
func NewPionServer(cfg PionServerConfig) (Server, error) {
if cfg.Addr == "" {
return nil, fmt.Errorf("PionServerConfig.Addr is required")
}
if cfg.TLSConfig == nil {
cfg.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
}
}
udpAddr, err := net.ResolveUDPAddr("udp", cfg.Addr)
if err != nil {
return nil, fmt.Errorf("resolve udp addr: %w", err)
}
// Adapter from tls.Config.GetCertificate (crypto/tls) to pion/dtls GetCertificate
var getCert func(*piondtls.ClientHelloInfo) (*tls.Certificate, error)
if cfg.TLSConfig.GetCertificate != nil {
tlsGetCert := cfg.TLSConfig.GetCertificate
getCert = func(chi *piondtls.ClientHelloInfo) (*tls.Certificate, error) {
if chi == nil {
return tlsGetCert(&tls.ClientHelloInfo{})
}
// The ACME manager selects certificates mainly by SNI (ServerName),
// so copy only the minimal fields needed.
return tlsGetCert(&tls.ClientHelloInfo{
ServerName: chi.ServerName,
})
}
}
dtlsCfg := &piondtls.Config{
// Certificate configuration for the server: static Certificates plus the GetCertificate adapter
Certificates: cfg.TLSConfig.Certificates,
GetCertificate: getCert,
InsecureSkipVerify: cfg.TLSConfig.InsecureSkipVerify,
ClientAuth: piondtls.ClientAuthType(cfg.TLSConfig.ClientAuth),
ClientCAs: cfg.TLSConfig.ClientCAs,
RootCAs: cfg.TLSConfig.RootCAs,
ServerName: cfg.TLSConfig.ServerName,
// Additional options such as ExtendedMasterSecret can be set here if needed
}
l, err := piondtls.Listen("udp", udpAddr, dtlsCfg)
if err != nil {
return nil, fmt.Errorf("dtls listen: %w", err)
}
return &pionServer{
listener: l,
}, nil
}
// Accept accepts a new DTLS connection and wraps it as a Session.
func (s *pionServer) Accept() (Session, error) {
conn, err := s.listener.Accept()
if err != nil {
return nil, err
}
dtlsConn, ok := conn.(*piondtls.Conn)
if !ok {
_ = conn.Close()
return nil, fmt.Errorf("accepted connection is not *dtls.Conn")
}
id := ""
if ra := dtlsConn.RemoteAddr(); ra != nil {
id = ra.String()
}
return &pionSession{
conn: dtlsConn,
id: id,
}, nil
}
// Close shuts down the DTLS listener.
func (s *pionServer) Close() error {
return s.listener.Close()
}
// pionClient is the pion/dtls-based Client implementation.
type pionClient struct {
addr string
tlsConfig *tls.Config
timeout time.Duration
}
// PionClientConfig defines the DTLS client configuration.
// PionClientConfig keeps the old DTLS client configuration shape for compatibility.
type PionClientConfig struct {
// Addr is the server's UDP address (e.g. "example.com:443").
Addr string
// TLSConfig is the tls.Config used for server authentication.
// Setting InsecureSkipVerify=true skips server verification and must only be used for development/testing.
Addr string
TLSConfig *tls.Config
// Timeout is the DTLS handshake timeout.
// If 0, the default of 10 seconds is used.
Timeout time.Duration
Timeout time.Duration
}
// NewPionClient creates a pion/dtls-based DTLS client.
func NewPionClient(cfg PionClientConfig) Client {
if cfg.Timeout == 0 {
cfg.Timeout = 10 * time.Second
}
if cfg.TLSConfig == nil {
// Default: a safe configuration that verifies certificates (using the system root CA chain).
// To skip certificate verification in debug mode, the caller must explicitly pass
// TLSConfig: &tls.Config{InsecureSkipVerify: true}.
cfg.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
}
}
return &pionClient{
addr: cfg.Addr,
tlsConfig: cfg.TLSConfig,
timeout: cfg.Timeout,
}
// disabledServer is a dummy Server implementation indicating that DTLS transport is disabled.
type disabledServer struct{}
func (s *disabledServer) Accept() (Session, error) {
return nil, fmt.Errorf("dtls transport is disabled; use gRPC tunnel instead")
}
// Connect performs the DTLS handshake with the server and returns a Session.
func (c *pionClient) Connect() (Session, error) {
if c.addr == "" {
return nil, fmt.Errorf("PionClientConfig.Addr is required")
}
ctx, cancel := context.WithTimeout(context.Background(), c.timeout)
defer cancel()
raddr, err := net.ResolveUDPAddr("udp", c.addr)
if err != nil {
return nil, fmt.Errorf("resolve udp addr: %w", err)
}
dtlsCfg := &piondtls.Config{
// The client only uses RootCAs/ServerName for server authentication.
// (No client certificates are planned for now, so GetCertificate is not passed.)
Certificates: c.tlsConfig.Certificates,
InsecureSkipVerify: c.tlsConfig.InsecureSkipVerify,
RootCAs: c.tlsConfig.RootCAs,
ServerName: c.tlsConfig.ServerName,
}
type result struct {
conn *piondtls.Conn
err error
}
ch := make(chan result, 1)
go func() {
conn, err := piondtls.Dial("udp", raddr, dtlsCfg)
ch <- result{conn: conn, err: err}
}()
select {
case <-ctx.Done():
return nil, fmt.Errorf("dtls dial timeout: %w", ctx.Err())
case res := <-ch:
if res.err != nil {
return nil, fmt.Errorf("dtls dial: %w", res.err)
}
id := ""
if ra := res.conn.RemoteAddr(); ra != nil {
id = ra.String()
}
return &pionSession{
conn: res.conn,
id: id,
}, nil
}
}
// Close is a no-op because the client holds no resources of its own.
func (c *pionClient) Close() error {
func (s *disabledServer) Close() error {
return nil
}
// disabledClient is a dummy Client implementation indicating that DTLS transport is disabled.
type disabledClient struct{}
func (c *disabledClient) Connect() (Session, error) {
return nil, fmt.Errorf("dtls transport is disabled; use gRPC tunnel instead")
}
func (c *disabledClient) Close() error {
return nil
}
// NewPionServer no longer creates a real DTLS server and always returns an error.
func NewPionServer(cfg PionServerConfig) (Server, error) {
return nil, fmt.Errorf("dtls transport is disabled; NewPionServer is no longer supported")
}
// NewPionClient no longer creates a real DTLS client and instead returns a disabledClient.
func NewPionClient(cfg PionClientConfig) Client {
return &disabledClient{}
}

Binary file not shown (new image, 126 KiB).


@@ -6,6 +6,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Tailwind CSS is served separately from /__hopgate_assets__/errors.css -->
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>404 Not Found - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>500 Internal Server Error - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>502 Bad Gateway - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>504 Gateway Timeout - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>525 TLS Handshake Failed - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -16,6 +16,16 @@ import (
// This matches existing 64KiB readers used around DTLS sessions (used by the JSON codec).
const defaultDecoderBufferSize = 64 * 1024
// dtlsReadBufferSize matches the pion/dtls internal buffer limit.
// pion/dtls's UnpackDatagram function uses an 8KB (8,192 bytes) receive buffer.
// Since DTLS is UDP-based, the entire datagram must be read in a single Read() call,
// and DTLS records exceeding this size cannot be processed.
const dtlsReadBufferSize = 8 * 1024 // 8KB
// maxProtoEnvelopeBytes is a conservative upper bound on the size of a single Protobuf Envelope.
// It is not yet enforced as a hard limit, but may be used for defensive checks later.
const maxProtoEnvelopeBytes = 512 * 1024 // 512KiB, with plenty of headroom
@@ -46,12 +56,15 @@ func (jsonCodec) Decode(r io.Reader, env *Envelope) error {
return dec.Decode(env)
}
// protobufCodec is a WireCodec implementation based on Protobuf + length-prefix framing.
// Each Envelope is encoded as [4-byte big-endian length] + [protobuf bytes].
// protobufCodec is a WireCodec implementation based on Protobuf length-prefix framing.
// Each Envelope is encoded as [4-byte big-endian length] [protobuf bytes].
type protobufCodec struct{}
// Encode converts the Envelope into a Protobuf Envelope and writes it with length-prefix framing.
// Since DTLS is UDP-based, the length prefix and protobuf data are combined into a single buffer and sent in one Write.
// Encode encodes an Envelope as a length-prefixed protobuf message.
// For DTLS (UDP-based), we combine the length prefix and protobuf data into a single buffer
// and send it with a single Write call to preserve message boundaries.
func (protobufCodec) Encode(w io.Writer, env *Envelope) error {
pbEnv, err := toProtoEnvelope(env)
if err != nil {
@@ -83,44 +96,50 @@ func (protobufCodec) Encode(w io.Writer, env *Envelope) error {
return fmt.Errorf("protobuf codec: empty marshaled envelope")
}
var lenBuf [4]byte
if len(data) > int(^uint32(0)) {
return fmt.Errorf("protobuf codec: envelope too large: %d bytes", len(data))
}
binary.BigEndian.PutUint32(lenBuf[:], uint32(len(data)))
if _, err := w.Write(lenBuf[:]); err != nil {
return fmt.Errorf("protobuf codec: write length prefix: %w", err)
}
if _, err := w.Write(data); err != nil {
return fmt.Errorf("protobuf codec: write payload: %w", err)
// For DTLS, combine the length prefix and protobuf data into a single buffer and send it in one Write
frame := make([]byte, 4+len(data))
binary.BigEndian.PutUint32(frame[:4], uint32(len(data)))
copy(frame[4:], data)
if _, err := w.Write(frame); err != nil {
return fmt.Errorf("protobuf codec: write frame: %w", err)
}
return nil
}
// Decode reads a Protobuf Envelope from a length-prefixed frame and converts it
// into the internal Envelope struct.
// Since DTLS is UDP-based, the entire datagram is read in a single Read.
// Decode reads a length-prefixed protobuf Envelope and converts it into the internal Envelope.
// For DTLS (UDP-based), we read the entire datagram in a single Read call.
func (protobufCodec) Decode(r io.Reader, env *Envelope) error {
var lenBuf [4]byte
if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
// 1) Read exactly the 4-byte length prefix.
header := make([]byte, 4)
if _, err := io.ReadFull(r, header); err != nil {
return fmt.Errorf("protobuf codec: read length prefix: %w", err)
}
n := binary.BigEndian.Uint32(lenBuf[:])
if n == 0 {
length := binary.BigEndian.Uint32(header)
if length == 0 {
return fmt.Errorf("protobuf codec: zero-length envelope")
}
if n > maxProtoEnvelopeBytes {
return fmt.Errorf("protobuf codec: envelope too large: %d bytes (max %d)", n, maxProtoEnvelopeBytes)
if length > maxProtoEnvelopeBytes {
return fmt.Errorf("protobuf codec: envelope too large: %d bytes (max %d)", length, maxProtoEnvelopeBytes)
}
buf := make([]byte, int(n))
if _, err := io.ReadFull(r, buf); err != nil {
// 2) Read exactly length bytes of payload.
payload := make([]byte, int(length))
if _, err := io.ReadFull(r, payload); err != nil {
return fmt.Errorf("protobuf codec: read payload: %w", err)
}
var pbEnv protocolpb.Envelope
if err := proto.Unmarshal(buf, &pbEnv); err != nil {
if err := proto.Unmarshal(payload, &pbEnv); err != nil {
return fmt.Errorf("protobuf codec: unmarshal envelope: %w", err)
}
@@ -128,9 +147,18 @@ func (protobufCodec) Decode(r io.Reader, env *Envelope) error {
}
// DefaultCodec is the default WireCodec used by the current runtime.
// The Protobuf-based codec is now used as the default.
// The Protobuf length-prefix based codec is currently the default.
// Server and client must both use this version for the wire format to match.
var DefaultCodec WireCodec = protobufCodec{}
// GetDTLSReadBufferSize returns the buffer size to use for reading from DTLS sessions.
// This value is aligned with pion/dtls's internal buffer limit (8KB).
func GetDTLSReadBufferSize() int {
return dtlsReadBufferSize
}
// toProtoEnvelope converts the internal Envelope struct into a Protobuf Envelope.
// The current implementation supports HTTP request/response and the stream-related types (StreamOpen/StreamData/StreamClose/StreamAck).
func toProtoEnvelope(env *Envelope) (*protocolpb.Envelope, error) {


@@ -0,0 +1,226 @@
package protocol
import (
"bufio"
"bytes"
"io"
"testing"
)
// mockDatagramConn simulates a datagram-based connection (like DTLS over UDP)
// where each Write sends a separate message and each Read receives a complete message.
// This mock verifies the FIXED behavior where the codec properly handles message boundaries.
type mockDatagramConn struct {
messages [][]byte
readIdx int
}
func newMockDatagramConn() *mockDatagramConn {
return &mockDatagramConn{
messages: make([][]byte, 0),
}
}
func (m *mockDatagramConn) Write(p []byte) (n int, err error) {
// Simulate datagram behavior: each Write is a separate message
msg := make([]byte, len(p))
copy(msg, p)
m.messages = append(m.messages, msg)
return len(p), nil
}
func (m *mockDatagramConn) Read(p []byte) (n int, err error) {
// Simulate datagram behavior: each Read returns a complete message
if m.readIdx >= len(m.messages) {
return 0, io.EOF
}
msg := m.messages[m.readIdx]
m.readIdx++
if len(p) < len(msg) {
return 0, io.ErrShortBuffer
}
copy(p, msg)
return len(msg), nil
}
// TestProtobufCodecDatagramBehavior tests that the protobuf codec works correctly
// with datagram-based transports (like DTLS over UDP) where message boundaries are preserved.
func TestProtobufCodecDatagramBehavior(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create a test envelope
testEnv := &Envelope{
Type: MessageTypeHTTP,
HTTPRequest: &Request{
RequestID: "test-req-123",
ClientID: "client-1",
ServiceName: "test-service",
Method: "GET",
URL: "/test/path",
Header: map[string][]string{
"User-Agent": {"test-client"},
},
Body: []byte("test body content"),
},
}
// Encode the envelope
if err := codec.Encode(conn, testEnv); err != nil {
t.Fatalf("Failed to encode envelope: %v", err)
}
// Verify that exactly one message was written (length prefix + data in single Write)
if len(conn.messages) != 1 {
t.Fatalf("Expected 1 message to be written, got %d", len(conn.messages))
}
// Verify the message structure: [4-byte length][protobuf data]
msg := conn.messages[0]
if len(msg) < 4 {
t.Fatalf("Message too short: %d bytes", len(msg))
}
// Decode the envelope using a buffered reader (as we do in actual code)
// to handle datagram-based reading properly
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
var decodedEnv Envelope
if err := codec.Decode(reader, &decodedEnv); err != nil {
t.Fatalf("Failed to decode envelope: %v", err)
}
// Verify the decoded envelope matches the original
if decodedEnv.Type != testEnv.Type {
t.Errorf("Type mismatch: got %v, want %v", decodedEnv.Type, testEnv.Type)
}
if decodedEnv.HTTPRequest == nil {
t.Fatal("HTTPRequest is nil after decode")
}
if decodedEnv.HTTPRequest.RequestID != testEnv.HTTPRequest.RequestID {
t.Errorf("RequestID mismatch: got %v, want %v", decodedEnv.HTTPRequest.RequestID, testEnv.HTTPRequest.RequestID)
}
if decodedEnv.HTTPRequest.Method != testEnv.HTTPRequest.Method {
t.Errorf("Method mismatch: got %v, want %v", decodedEnv.HTTPRequest.Method, testEnv.HTTPRequest.Method)
}
if decodedEnv.HTTPRequest.URL != testEnv.HTTPRequest.URL {
t.Errorf("URL mismatch: got %v, want %v", decodedEnv.HTTPRequest.URL, testEnv.HTTPRequest.URL)
}
if !bytes.Equal(decodedEnv.HTTPRequest.Body, testEnv.HTTPRequest.Body) {
t.Errorf("Body mismatch: got %v, want %v", decodedEnv.HTTPRequest.Body, testEnv.HTTPRequest.Body)
}
}
// TestProtobufCodecStreamData tests encoding/decoding of StreamData messages
func TestProtobufCodecStreamData(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create a StreamData envelope
testEnv := &Envelope{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-123"),
Seq: 42,
Data: []byte("stream data payload"),
},
}
// Encode
if err := codec.Encode(conn, testEnv); err != nil {
t.Fatalf("Failed to encode StreamData: %v", err)
}
// Verify single message
if len(conn.messages) != 1 {
t.Fatalf("Expected 1 message, got %d", len(conn.messages))
}
// Decode using a buffered reader (as we do in actual code)
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
var decodedEnv Envelope
if err := codec.Decode(reader, &decodedEnv); err != nil {
t.Fatalf("Failed to decode StreamData: %v", err)
}
// Verify
if decodedEnv.Type != MessageTypeStreamData {
t.Errorf("Type mismatch: got %v, want %v", decodedEnv.Type, MessageTypeStreamData)
}
if decodedEnv.StreamData == nil {
t.Fatal("StreamData is nil")
}
if decodedEnv.StreamData.ID != testEnv.StreamData.ID {
t.Errorf("StreamID mismatch: got %v, want %v", decodedEnv.StreamData.ID, testEnv.StreamData.ID)
}
if decodedEnv.StreamData.Seq != testEnv.StreamData.Seq {
t.Errorf("Seq mismatch: got %v, want %v", decodedEnv.StreamData.Seq, testEnv.StreamData.Seq)
}
if !bytes.Equal(decodedEnv.StreamData.Data, testEnv.StreamData.Data) {
t.Errorf("Data mismatch: got %v, want %v", decodedEnv.StreamData.Data, testEnv.StreamData.Data)
}
}
// TestProtobufCodecMultipleMessages tests encoding/decoding multiple messages
func TestProtobufCodecMultipleMessages(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create multiple test envelopes
envelopes := []*Envelope{
{
Type: MessageTypeStreamOpen,
StreamOpen: &StreamOpen{
ID: StreamID("stream-1"),
Service: "test-service",
TargetAddr: "127.0.0.1:8080",
},
},
{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-1"),
Seq: 1,
Data: []byte("first chunk"),
},
},
{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-1"),
Seq: 2,
Data: []byte("second chunk"),
},
},
{
Type: MessageTypeStreamClose,
StreamClose: &StreamClose{
ID: StreamID("stream-1"),
Error: "",
},
},
}
// Encode all messages
for i, env := range envelopes {
if err := codec.Encode(conn, env); err != nil {
t.Fatalf("Failed to encode message %d: %v", i, err)
}
}
// Verify that each encode produced exactly one message
if len(conn.messages) != len(envelopes) {
t.Fatalf("Expected %d messages, got %d", len(envelopes), len(conn.messages))
}
// Decode and verify all messages using a buffered reader (as we do in actual code)
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
for i := 0; i < len(envelopes); i++ {
var decoded Envelope
if err := codec.Decode(reader, &decoded); err != nil {
t.Fatalf("Failed to decode message %d: %v", i, err)
}
if decoded.Type != envelopes[i].Type {
t.Errorf("Message %d type mismatch: got %v, want %v", i, decoded.Type, envelopes[i].Type)
}
}
}


@@ -2,7 +2,7 @@ syntax = "proto3";
package hopgate.protocol.v1;
option go_package = "github.com/dalbodeule/hop-gate/internal/protocol/pb;protocolpb";
option go_package = "internal/protocol/pb;pb";
// HeaderValues wraps multiple header values for a single HTTP header key.


@@ -718,7 +718,7 @@ const file_internal_protocol_hopgate_stream_proto_rawDesc = "" +
"\fstream_close\x18\x05 \x01(\v2 .hopgate.protocol.v1.StreamCloseH\x00R\vstreamClose\x12?\n" +
"\n" +
"stream_ack\x18\x06 \x01(\v2\x1e.hopgate.protocol.v1.StreamAckH\x00R\tstreamAckB\t\n" +
"\apayloadB@Z>github.com/dalbodeule/hop-gate/internal/protocol/pb;protocolpbb\x06proto3"
"\apayloadB\x19Z\x17internal/protocol/pb;pbb\x06proto3"
var (
file_internal_protocol_hopgate_stream_proto_rawDescOnce sync.Once


@@ -0,0 +1,119 @@
package pb
import (
"context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// HopGateTunnelClient is the client API for the HopGateTunnel service.
type HopGateTunnelClient interface {
// OpenTunnel establishes a long-lived bi-directional stream between
// a HopGate client and the server. Both HTTP requests and responses
// are multiplexed as Envelope messages on this stream.
OpenTunnel(ctx context.Context, opts ...grpc.CallOption) (HopGateTunnel_OpenTunnelClient, error)
}
type hopGateTunnelClient struct {
cc grpc.ClientConnInterface
}
// NewHopGateTunnelClient creates a new HopGateTunnelClient.
func NewHopGateTunnelClient(cc grpc.ClientConnInterface) HopGateTunnelClient {
return &hopGateTunnelClient{cc: cc}
}
func (c *hopGateTunnelClient) OpenTunnel(ctx context.Context, opts ...grpc.CallOption) (HopGateTunnel_OpenTunnelClient, error) {
stream, err := c.cc.NewStream(ctx, &_HopGateTunnel_serviceDesc.Streams[0], "/hopgate.protocol.v1.HopGateTunnel/OpenTunnel", opts...)
if err != nil {
return nil, err
}
return &hopGateTunnelOpenTunnelClient{ClientStream: stream}, nil
}
// HopGateTunnel_OpenTunnelClient is the client-side stream for OpenTunnel.
type HopGateTunnel_OpenTunnelClient interface {
Send(*Envelope) error
Recv() (*Envelope, error)
grpc.ClientStream
}
type hopGateTunnelOpenTunnelClient struct {
grpc.ClientStream
}
func (x *hopGateTunnelOpenTunnelClient) Send(m *Envelope) error {
return x.ClientStream.SendMsg(m)
}
func (x *hopGateTunnelOpenTunnelClient) Recv() (*Envelope, error) {
m := new(Envelope)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// HopGateTunnelServer is the server API for the HopGateTunnel service.
type HopGateTunnelServer interface {
// OpenTunnel handles a long-lived bi-directional stream between the server
// and a HopGate client. Implementations are responsible for reading and
// writing Envelope messages on the stream.
OpenTunnel(HopGateTunnel_OpenTunnelServer) error
}
// UnimplementedHopGateTunnelServer can be embedded to have forward compatible implementations.
type UnimplementedHopGateTunnelServer struct{}
// OpenTunnel returns an Unimplemented error by default.
func (UnimplementedHopGateTunnelServer) OpenTunnel(HopGateTunnel_OpenTunnelServer) error {
return status.Errorf(codes.Unimplemented, "method OpenTunnel not implemented")
}
// RegisterHopGateTunnelServer registers the HopGateTunnel service with the given gRPC server.
func RegisterHopGateTunnelServer(s grpc.ServiceRegistrar, srv HopGateTunnelServer) {
s.RegisterService(&_HopGateTunnel_serviceDesc, srv)
}
// HopGateTunnel_OpenTunnelServer is the server-side stream for OpenTunnel.
type HopGateTunnel_OpenTunnelServer interface {
Send(*Envelope) error
Recv() (*Envelope, error)
grpc.ServerStream
}
func _HopGateTunnel_OpenTunnel_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(HopGateTunnelServer).OpenTunnel(&hopGateTunnelOpenTunnelServer{ServerStream: stream})
}
type hopGateTunnelOpenTunnelServer struct {
grpc.ServerStream
}
func (x *hopGateTunnelOpenTunnelServer) Send(m *Envelope) error {
return x.ServerStream.SendMsg(m)
}
func (x *hopGateTunnelOpenTunnelServer) Recv() (*Envelope, error) {
m := new(Envelope)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
var _HopGateTunnel_serviceDesc = grpc.ServiceDesc{
ServiceName: "hopgate.protocol.v1.HopGateTunnel",
HandlerType: (*HopGateTunnelServer)(nil),
Streams: []grpc.StreamDesc{
{
StreamName: "OpenTunnel",
Handler: _HopGateTunnel_OpenTunnel_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "internal/protocol/hopgate_stream.proto",
}

File diff suppressed because it is too large.


@@ -224,182 +224,46 @@ This document tracks implementation progress against the HopGate architecture an
---
### 3.3 Proxy Core / HTTP Tunneling
### 3.3 Proxy Core / gRPC Tunneling
- [ ] Extend the server-side Proxy implementation: [`internal/proxy/server.go`](internal/proxy/server.go)
- Currently only the `ServerProxy` / `Router` interfaces and `NewHTTPServer` are defined;
the actual HTTP/HTTPS listener and DTLS session mapping logic lives in
`newHTTPHandler` / `dtlsSessionWrapper.ForwardHTTP` in [`cmd/server/main.go`](cmd/server/main.go).
- The refactoring that moves the proxy core logic into the proxy layer has not been done yet. (Linked to item 3.6.)
HopGate's end goal is to forward HTTP traffic over a tunnel based on **TCP + TLS(HTTPS) + HTTP/2 + gRPC**.
This section keeps the DTLS-based initial design only as a record; the actual implementation and remaining work are redefined in terms of the gRPC tunnel.
- [x] Extend the client-side Proxy implementation: [`internal/proxy/client.go`](internal/proxy/client.go)
- Implemented the loop that receives `protocol.Request` over the DTLS session → performs the local HTTP call → sends back `protocol.Response`.
- Timeout/cancellation/error handling.
- [x] Design/implement the server-side gRPC tunnel endpoint (see the routing sketch after this list)
- On the same port as the public HTTPS listener (443/TCP):
- plain HTTP requests (browser/REST) are routed to the existing reverse-proxy path,
- requests with `Content-Type: application/grpc` are routed to the gRPC server for client tunnels.
- Example: `rpc OpenTunnel(stream TunnelFrame) returns (stream TunnelFrame)` (bi-directional streaming).
- HTTP/2 + ALPN (h2) keeps the gRPC stream alive, and HTTP request/response messages are multiplexed as `TunnelFrame`s.
- [x] Add Proxy wiring to the server main: [`cmd/server/main.go`](cmd/server/main.go)
- Register sessions that completed the DTLS handshake in the Proxy routing table.
- Connect the HTTPS server with the Proxy handler.
- [x] Design/implement the client-side gRPC tunnel
- The client process keeps **one (or a few)** long-lived bi-di gRPC streams open to the HopGate server.
- It receives `TunnelFrame`s from the server (request metadata + body chunks),
proxies them to the local HTTP service (e.g. `127.0.0.1:8080`), and sends the response back as a `TunnelFrame` sequence.
- The HTTP mapping / stream ARQ experience from `internal/proxy/client.go` informs the gRPC per-message chunking/flow-control design.
- [x] Add Proxy loop wiring to the client main: [`cmd/client/main.go`](cmd/client/main.go)
- Run `proxy.ClientProxy.StartLoop` after a successful handshake.
- [x] Define the HTTP ↔ gRPC tunnel mapping convention
- Define a schema for how one HTTP request/response pair is represented on the gRPC stream:
- Request: `StreamID`, method, URL, headers, body chunks
- Response: `StreamID`, status, headers, body chunks, error
- Decide whether to serialize the current logical model in `internal/protocol/protocol.go` (Envelope/StreamOpen/StreamData/StreamClose/StreamAck)
as gRPC messages (e.g. oneof fields), or to define new gRPC-specific messages.
- For back-pressure / flow control, rely on gRPC/HTTP2 stream flow control as much as possible,
and introduce additional application-level windowing only minimally, if needed.
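A minimal sketch of the shared-port routing described in the endpoint item above, for illustration only (`grpcServer` and `proxyHandler` are assumed placeholder names, not actual HopGate symbols):

```go
package main

import (
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

// newSharedHandler serves gRPC tunnel traffic and ordinary HTTP(S) traffic
// on the same TLS port.
func newSharedHandler(grpcServer *grpc.Server, proxyHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// gRPC arrives over HTTP/2 and announces itself via the content type.
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r) // tunnel clients
			return
		}
		proxyHandler.ServeHTTP(w, r) // browsers / REST traffic
	})
}
```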
#### 3.3A Stream-based DTLS Tunneling
- [ ] Define the gRPC-tunnel E2E flow and test plan
- Write a test plan that covers, on a single gRPC stream:
- several static resources (`/css`, `/js`, `/img`) requested concurrently,
- scenarios mixing large responses (multi-MB files) with small ones (API JSON),
- client restart / reconnect-after-network-partition scenarios.
- Expected behavior:
- A slow request must not excessively delay other requests **within the same TCP connection / stream set**.
- Server/client logs must not show protocol-violation warnings (`unexpected frame ...`).
The initial tunneling model used a **single JSON envelope + single DTLS write per HTTP message**, which could hit UDP MTU limits (`sendto: message too long`) for large bodies.
The current implementation uses a **stream/frame-based** protocol over DTLS (`StreamOpen` / `StreamData` / `StreamClose`), and this section documents its constraints and further improvements (e.g. ARQ).
Constraints:
- The transport layer must remain DTLS (pion/dtls).
- We must move away from the single-envelope JSON model and chunk HTTP bodies under a safe MTU.
- Given UDP characteristics, we need an application-level ARQ so that **only lost/corrupted chunks are retransmitted**.
The following tasks describe concrete work items to be implemented on the `feature/udp-stream` branch.
---
##### 3.3A.1 Stream framing protocol (JSON, phase 1)
- [x] Organize and extend the stream frame types: [`internal/protocol/protocol.go`](internal/protocol/protocol.go:35)
- Reuse the already defined stream-related types in phase 1:
- `MessageTypeStreamOpen`, `MessageTypeStreamData`, `MessageTypeStreamClose`
- [`Envelope`](internal/protocol/protocol.go:52), [`StreamOpen`](internal/protocol/protocol.go:69), [`StreamData`](internal/protocol/protocol.go:80), [`StreamClose`](internal/protocol/protocol.go:86)
- Add a per-stream sequence number to `StreamData`:
- Example:
```go
type StreamData struct {
ID StreamID `json:"id"`
Seq uint64 `json:"seq"` // per-stream sequence starting at 0
Data []byte `json:"data"`
}
```
- [x] Add stream ACK / retransmission control messages: [`internal/protocol/protocol.go`](internal/protocol/protocol.go:52)
- Add a `StreamAck` message and `MessageTypeStreamAck` for selective retransmission:
```go
const (
MessageTypeStreamAck MessageType = "stream_ack"
)
type StreamAck struct {
ID StreamID `json:"id"` // target stream
AckSeq uint64 `json:"ack_seq"` // last contiguously received sequence
LostSeqs []uint64 `json:"lost_seqs"` // optional list of missing seqs
WindowSize uint32 `json:"window_size"` // optional: allowed number of in-flight frames
}
```
- Extend [`Envelope`](internal/protocol/protocol.go:52) with a `StreamAck *StreamAck` field.
- [x] Define an MTU-safe chunk size
- Define a safe payload size constant (4KiB) that accounts for DTLS/UDP headers and Protobuf/length-prefix framing.
- Implemented as `StreamChunkSize` in [`internal/protocol/protocol.go`](internal/protocol/protocol.go:32).
- In the stream tunneling implementation, every `StreamData.Data` must be sliced into chunks no larger than this size.
---
##### 3.3A.2 Application-level ARQ (Selective Retransmission)
- [x] Implement receiver-side ARQ state management
- For each stream, maintain `expectedSeq`, an out-of-order buffer (`received`), and a lost-sequence set (`lost`),
delivering in-order frames directly to the HTTP body buffer while buffering/reordering out-of-order ones.
- [x] Implement the receiver-side StreamAck policy
- On every `StreamData` frame, send `StreamAck{AckSeq, LostSeqs}` where `AckSeq = expectedSeq - 1` and `LostSeqs`
contains a bounded set (up to a fixed limit) of missing sequence numbers in the current receive window.
- [x] Implement sender-side retransmission (StreamAck-driven); see the sketch below
- A per-stream `streamSender` tracks `outstanding[seq] = payload` for unacknowledged frames. Upon receiving
`StreamAck{AckSeq, LostSeqs}`, it deletes all `seq <= AckSeq` and retransmits only frames whose sequence
numbers appear in `LostSeqs`.
> Note: The current implementation covers StreamAck-based **selective retransmission**; a separate RTO-based
> background retransmission loop is left as a potential future enhancement.
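For illustration, the sender-side bookkeeping can be sketched as follows, reusing the `StreamID` / `StreamAck` shapes shown in 3.3A.1; `send` is an assumed frame-writing callback, not the real implementation:

```go
package sketch

// StreamID and StreamAck follow the shapes shown in 3.3A.1.
type StreamID string

type StreamAck struct {
	ID       StreamID
	AckSeq   uint64
	LostSeqs []uint64
}

// streamSender tracks unacknowledged frames for one stream.
type streamSender struct {
	id          StreamID
	outstanding map[uint64][]byte // seq -> payload, awaiting ack
	send        func(id StreamID, seq uint64, data []byte) error
}

// handleAck drops cumulatively acknowledged frames (seq <= AckSeq) and
// retransmits only the frames the receiver reported as missing.
func (s *streamSender) handleAck(ack *StreamAck) error {
	for seq := range s.outstanding {
		if seq <= ack.AckSeq {
			delete(s.outstanding, seq)
		}
	}
	for _, seq := range ack.LostSeqs {
		if data, ok := s.outstanding[seq]; ok {
			if err := s.send(s.id, seq, data); err != nil {
				return err
			}
		}
	}
	return nil
}
```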
---
##### 3.3A.3 HTTP ↔ stream mapping (server/client)
- [x] Server → client request stream: [`cmd/server/main.go`](cmd/server/main.go:200)
- `ForwardHTTP` is implemented in stream mode and behaves as follows:
- On receiving an HTTP request:
- Generate a new `StreamID` per incoming HTTP request on the DTLS session (incrementing per session).
- Send `StreamOpen`:
- Encode method/URL/headers into [`StreamOpen`](internal/protocol/protocol.go:69)`.Header` or a pseudo-header scheme.
- Read the HTTP request body and send it as a continuous sequence of `StreamData{ID, Seq, Data}` frames.
- When the body ends, send `StreamClose{ID, Error:""}`.
- Receiving the response:
- Receive response status/headers via a reverse-direction `StreamOpen` from the client and map them onto `http.ResponseWriter`.
- For each subsequent `StreamData`, write the chunk directly to the HTTP response via `http.ResponseWriter.Write`.
- On `StreamClose`, finish the response and clean up per-stream state.
- [x] Request handling stream on the client: [`internal/proxy/client.go`](internal/proxy/client.go:200)
- On receiving `StreamOpen{ID, ...}` from the server, spawn a goroutine to handle the local HTTP request for that stream ID.
- Prepare an `io.Pipe` (or channel-backed body reader) per stream and write incoming `StreamData` chunks into it.
- For the local HTTP client response, in the reverse direction:
- response status/headers → `StreamOpen` (client → server)
- response body → a sequence of `StreamData` frames
- send `StreamClose` when done
---
##### 3.3A.4 JSON → binary serialization (potential phase 2)
- [x] After implementing/stabilizing phase 1 of the JSON-based stream protocol, revisit the serialization format and migrate to Protobuf
- The runtime now uses a Protobuf-based, length-prefixed `Envelope` format instead of JSON.
- HTTP/stream payloads remain bounded to an MTU-safe size (e.g. 4KiB via `StreamChunkSize`), so individual frames stay small.
- [x] Switch to length-prefixed binary (Protobuf) framing
- We keep the same logical model (`StreamOpen` / `StreamData(seq)` / `StreamClose` / `StreamAck`) while replacing the
wire format with Protobuf length-prefixed framing, implemented as `protobufCodec`.
- [x] This switch wraps the serialization layer inside `internal/protocol` in a thin abstraction.
- In [`internal/protocol/codec.go`](internal/protocol/codec.go:130), the `WireCodec` abstraction and Protobuf-based `DefaultCodec` allow callers to use only `protocol.DefaultCodec`, while JSON remains as an auxiliary codec.
> Note: The DTLS-based stream/ARQ/multiplexing work (3.3A/3.3B) is kept only as a reference for implementation
> experience and ideas; new features and operational plans proceed against the gRPC tunnel.
---
@@ -435,7 +299,7 @@ The following tasks describe concrete work items to be implemented on the `featu
### 3.6 Hardening / Stability & Configuration
- [ ] Add configuration validation
- [x] Add configuration validation
- Clear error messages for missing/invalid required env vars.
- [ ] Error handling / retry policy

protocol.md (new file, 197 lines)

@@ -0,0 +1,197 @@
# HopGate gRPC Tunnel Protocol
This document describes the gRPC-based HTTP tunneling protocol between the HopGate server and its clients.
## 1. Transport Overview
- Transport: TCP + TLS(HTTPS) + HTTP/2 + gRPC
- Single long-lived bi-directional gRPC stream per client: `OpenTunnel`
- Application payload type: `Envelope` (from `internal/protocol/hopgate_stream.proto`)
- HTTP requests/responses are multiplexed as logical streams identified by `StreamID`.
gRPC service (conceptual):
```proto
service HopGateTunnel {
rpc OpenTunnel (stream Envelope) returns (stream Envelope);
}
```
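For orientation, a client-side connection sketch under stated assumptions (`tunnel.example.com:443` is a placeholder address; the generated `pb` package is the one added in this change; error handling trimmed):

```go
package main

import (
	"context"
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	pb "github.com/dalbodeule/hop-gate/internal/protocol/pb" // assumed import path
)

func main() {
	// One TLS/HTTP2 connection carries the single long-lived OpenTunnel stream.
	creds := credentials.NewTLS(&tls.Config{MinVersion: tls.VersionTLS12})
	conn, err := grpc.NewClient("tunnel.example.com:443", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	stream, err := pb.NewHopGateTunnelClient(conn).OpenTunnel(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	_ = stream // Envelope messages are sent/received on this bi-directional stream.
}
```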
## 2. Message Types
Defined in `internal/protocol/hopgate_stream.proto`:
- `HeaderValues`
- Wraps repeated header values: `map<string, HeaderValues>`
- `Request` / `Response`
- Simple single-message HTTP representation (not used in the streaming tunnel path initially).
- `StreamOpen`
- Opens a new logical stream for HTTP request/response (or other protocols in the future).
- `StreamData`
- Carries body chunks for a stream (`id`, `seq`, `data`).
- `StreamClose`
- Marks the end of a stream (`id`, `error`).
- `StreamAck`
- Legacy ARQ/flow-control hint for UDP/DTLS; in the gRPC tunnel it is reserved/optional.
- `Envelope`
- Top-level container with `oneof payload` of the above types.
In the gRPC tunnel, `Envelope` is the only gRPC message type used on the `OpenTunnel` stream.
## 3. Logical Streams and StreamID
- A single `OpenTunnel` gRPC stream multiplexes many **logical streams**.
- Each logical stream corresponds to one HTTP request/response pair.
- Logical streams are identified by `StreamOpen.id` (text StreamID).
- The server generates unique IDs per gRPC connection:
- HTTP streams: `"http-{n}"` where `n` is a monotonically increasing counter.
- Control stream: `"control-0"` (special handshake/metadata stream).
Within a gRPC connection:
- Multiple `StreamID`s may be active concurrently.
- Frames with different StreamIDs may be arbitrarily interleaved.
- Order within a stream is tracked by `StreamData.seq` (starting at 0).
## 4. HTTP Request Mapping (Server → Client)
When the public HTTPS reverse-proxy (`cmd/server/main.go`) receives an HTTP request for a domain that is bound
to a client tunnel, it serializes the request into a logical stream as follows.
### 4.1 StreamOpen (request metadata and headers)
- `StreamOpen.id`
- New unique StreamID: `"http-{n}"`.
- `StreamOpen.service_name`
- Logical service selection on the client (e.g., `"web"`).
- `StreamOpen.target_addr`
- Optional explicit local target address on the client (e.g., `"127.0.0.1:8080"`).
- `StreamOpen.header`
- Contains HTTP request headers and pseudo-headers:
- Pseudo-headers:
- `X-HopGate-Method`: HTTP method (e.g., `"GET"`, `"POST"`).
- `X-HopGate-URL`: original URL path + query (e.g., `"/api/v1/foo?bar=1"`).
- `X-HopGate-Host`: Host header value.
- Other keys:
- All remaining HTTP headers from the incoming request, copied as-is into the map.
### 4.2 StreamData* (request body chunks)
- If the request has a body, the server chunks it into fixed-size pieces.
- Chunk size: `protocol.StreamChunkSize` (currently 4 KiB).
- For each chunk:
- `StreamData.id = StreamOpen.id`
- `StreamData.seq` increments from 0, 1, 2, …
- `StreamData.data` contains the raw bytes.
### 4.3 StreamClose (end of request body)
- After sending all body chunks, the server sends a `StreamClose`:
- `StreamClose.id = StreamOpen.id`
- `StreamClose.error` is empty on success.
- If there was an application-level error while reading the body, `error` contains a short description.
The client reconstructs the HTTP request by:
- Reassembling the URL and headers from the `StreamOpen` pseudo-headers and header map.
- Concatenating `StreamData.data` in `seq` order into the request body.
- Treating `StreamClose` as the end-of-stream marker.
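A rough sketch of this request mapping (illustrative only; the oneof wrapper and field names such as `Envelope_StreamOpen` and `HeaderValues.Values` are assumed from the proto in this change, and `send` stands in for writing an Envelope to the `OpenTunnel` stream):

```go
package sketch

import (
	"net/http"

	pb "github.com/dalbodeule/hop-gate/internal/protocol/pb" // assumed import path
)

const chunkSize = 4 * 1024 // mirrors protocol.StreamChunkSize

// sendRequest cuts one HTTP request into the StreamOpen / StreamData* /
// StreamClose sequence described above.
func sendRequest(send func(*pb.Envelope) error, id string, r *http.Request, body []byte) error {
	hdr := map[string]*pb.HeaderValues{
		"X-HopGate-Method": {Values: []string{r.Method}},
		"X-HopGate-URL":    {Values: []string{r.URL.RequestURI()}},
		"X-HopGate-Host":   {Values: []string{r.Host}},
	}
	for k, v := range r.Header {
		hdr[k] = &pb.HeaderValues{Values: v}
	}
	if err := send(&pb.Envelope{Payload: &pb.Envelope_StreamOpen{
		StreamOpen: &pb.StreamOpen{Id: id, Header: hdr},
	}}); err != nil {
		return err
	}
	for seq := uint64(0); len(body) > 0; seq++ {
		n := chunkSize
		if len(body) < n {
			n = len(body)
		}
		if err := send(&pb.Envelope{Payload: &pb.Envelope_StreamData{
			StreamData: &pb.StreamData{Id: id, Seq: seq, Data: body[:n]},
		}}); err != nil {
			return err
		}
		body = body[n:]
	}
	// An empty error marks a clean end of the request body.
	return send(&pb.Envelope{Payload: &pb.Envelope_StreamClose{
		StreamClose: &pb.StreamClose{Id: id},
	}})
}
```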
## 5. HTTP Response Mapping (Client → Server)
The client receives `StreamOpen` + `StreamData*` + `StreamClose`, performs a local HTTP request to its
configured target (e.g., `http://127.0.0.1:8080`), then returns an HTTP response using the same StreamID.
### 5.1 StreamOpen (response headers and status)
- `StreamOpen.id`
- Same as the request StreamID.
- `StreamOpen.header`
- Contains response headers and a pseudo-header for status:
- Pseudo-header:
- `X-HopGate-Status`: HTTP status code as a string (e.g., `"200"`, `"502"`).
- Other keys:
- All HTTP response headers from the local backend, copied as-is.
### 5.2 StreamData* (response body chunks)
- The client reads the local HTTP response body and chunks it into 4 KiB pieces (same `StreamChunkSize`).
- For each chunk:
- `StreamData.id = StreamOpen.id`
- `StreamData.seq` increments from 0.
- `StreamData.data` contains the raw bytes.
### 5.3 StreamClose (end of response body)
- When the local backend response is fully read, the client sends a `StreamClose`:
- `StreamClose.id` is the same StreamID.
- `StreamClose.error`:
- Empty string on success.
- Short error description if the local HTTP request/response failed (e.g., connect timeout).
The server reconstructs the HTTP response by:
- Parsing `X-HopGate-Status` into an integer HTTP status code.
- Copying other headers into the outgoing response writer (with some security headers overridden by the server).
- Concatenating `StreamData.data` in `seq` order into the HTTP response body.
- Considering `StreamClose.error` for logging/metrics and possibly mapping to error pages if needed.
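The server-side reconstruction can be sketched like this (again an illustration with assumed `pb` field names, not the actual HopGate code):

```go
package sketch

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"

	pb "github.com/dalbodeule/hop-gate/internal/protocol/pb" // assumed import path
)

// writeResponseHeader maps a response StreamOpen onto an http.ResponseWriter.
func writeResponseHeader(w http.ResponseWriter, open *pb.StreamOpen) error {
	status := http.StatusBadGateway // fallback if the pseudo-header is missing
	if v, ok := open.Header["X-HopGate-Status"]; ok && len(v.Values) > 0 {
		n, err := strconv.Atoi(v.Values[0])
		if err != nil {
			return fmt.Errorf("bad X-HopGate-Status %q: %w", v.Values[0], err)
		}
		status = n
	}
	for k, vals := range open.Header {
		if strings.HasPrefix(k, "X-HopGate-") {
			continue // pseudo-headers are not forwarded to the end user
		}
		for _, v := range vals.Values {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(status)
	return nil
}
```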
## 6. Control / Handshake Stream
Before any HTTP request streams are opened, the client sends a single **control stream** to authenticate
and describe itself.
- `StreamOpen` (control):
- `id = "control-0"`
- `service_name = "control"`
- `header` contains:
- `X-HopGate-Domain`: domain this client is responsible for.
- `X-HopGate-API-Key`: client API key for the domain.
- `X-HopGate-Local-Target`: default local target such as `127.0.0.1:8080`.
- No `StreamData` is required for the control stream in the initial design.
- The server can optionally reply with its own control `StreamOpen/Close` to signal acceptance/rejection.
On the server side:
- `grpcTunnelServer.OpenTunnel` should:
1. Wait for the first `Envelope` with `StreamOpen.id == "control-0"`.
2. Extract domain, api key, and local target from the headers.
3. Call the ent-based `DomainValidator` to validate `(domain, api_key)`.
4. If validation succeeds, register this gRPC stream as the active tunnel for that domain.
5. If validation fails, log and close the gRPC stream.
Once the control stream handshake completes successfully, the server may start multiplexing multiple
HTTP request streams (`http-0`, `http-1`, …) over the same `OpenTunnel` connection.
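A minimal sketch of the client-side control handshake under the same assumptions (the stream type matches the generated code in this change; field and wrapper names are assumed):

```go
package sketch

import pb "github.com/dalbodeule/hop-gate/internal/protocol/pb" // assumed import path

// sendControlHandshake sends the control-stream StreamOpen described above as
// the first Envelope on a freshly opened OpenTunnel stream.
func sendControlHandshake(stream pb.HopGateTunnel_OpenTunnelClient, domain, apiKey, localTarget string) error {
	return stream.Send(&pb.Envelope{Payload: &pb.Envelope_StreamOpen{
		StreamOpen: &pb.StreamOpen{
			Id:          "control-0",
			ServiceName: "control",
			Header: map[string]*pb.HeaderValues{
				"X-HopGate-Domain":       {Values: []string{domain}},
				"X-HopGate-API-Key":      {Values: []string{apiKey}},
				"X-HopGate-Local-Target": {Values: []string{localTarget}},
			},
		},
	}})
}
```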
## 7. Multiplexing Semantics
- A single TCP + TLS + HTTP/2 + gRPC connection carries:
- One long-lived `OpenTunnel` gRPC bi-di stream.
- Within it, many logical streams identified by `StreamID`.
- The server can open multiple HTTP streams concurrently for a given client:
- Example: `http-0` for `/css/app.css`, `http-1` for `/api/users`, `http-2` for `/img/logo.png`.
- Frames for these IDs can interleave arbitrarily on the wire.
- Per-stream ordering is preserved by combining `seq` ordering and the reliability of TCP/gRPC.
- Slow or large responses on one stream should not prevent other streams from making progress,
because gRPC/HTTP2 handles stream-level flow control and scheduling.
## 8. Flow Control and StreamAck
- The gRPC tunnel runs over TCP/HTTP2, which already provides:
- Reliable, in-order delivery.
- Connection-level and stream-level flow control.
- Therefore, application-level selective retransmission is **not required** for the gRPC tunnel.
- `StreamAck` remains defined in the proto for backward compatibility with the DTLS design and
as a potential future hint channel (e.g., window size hints), but is not used in the initial gRPC tunnel.
## 9. Security Considerations
- TLS:
- In production, the server uses ACME-issued certificates, and clients validate the server certificate
using system Root CAs and SNI (`ServerName`).
- In debug mode, clients may use `InsecureSkipVerify: true` to allow local/self-signed certs.
- Authentication:
- Application-level authentication relies on `(domain, api_key)` pairs sent via the control stream headers.
- The server must validate these pairs against the `Domain` table using `DomainValidator`.
- Authorization and isolation:
- Each gRPC tunnel is bound to a single domain (or a defined set of domains) after successful control handshake.
- HTTP requests for other domains must not be forwarded over this tunnel.
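A sketch of the two client TLS configurations described above (`tunnel.example.com` is a placeholder domain):

```go
package sketch

import (
	"crypto/tls"

	"google.golang.org/grpc/credentials"
)

// tunnelCreds returns gRPC transport credentials matching the bullets above.
func tunnelCreds(debug bool) credentials.TransportCredentials {
	if debug {
		// Debug only: skips server certificate verification.
		return credentials.NewTLS(&tls.Config{InsecureSkipVerify: true})
	}
	// Production: verify the ACME-issued cert via system Root CAs and SNI.
	return credentials.NewTLS(&tls.Config{
		MinVersion: tls.VersionTLS12,
		ServerName: "tunnel.example.com",
	})
}
```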
Aligning the server and client implementations against this protocol makes it possible to multiplex many HTTP requests reliably over a single gRPC `OpenTunnel` stream while retaining domain/API-key authentication and TLS security.

tools/build_server_image.sh (new executable file, 57 lines)

@@ -0,0 +1,57 @@
#!/bin/sh
# POSIX sh build script for the hop-gate server image.
# VERSION uses the 7-character SHA of the current git commit.
set -eu
# Compute the repo root relative to the script location
SCRIPT_DIR=$(cd "$(dirname "$0")" >/dev/null 2>&1 && pwd)
REPO_ROOT="${SCRIPT_DIR}/.."
cd "${REPO_ROOT}"
# 7-character SHA of the current commit; falls back to "dev" when git info is unavailable
VERSION=$(git rev-parse --short=7 HEAD 2>/dev/null || echo dev)
# Default image name (can be overridden with the first argument)
# Examples:
# ./tools/build_server_image.sh
# ./tools/build_server_image.sh my/image/name
IMAGE_NAME=${1:-ghcr.io/dalbodeule/hop-gate}
echo "Building hop-gate server image"
echo " context : ${REPO_ROOT}"
echo " image : ${IMAGE_NAME}:${VERSION}"
echo " version : ${VERSION}"
# Check whether docker buildx is available
if command -v docker >/dev/null 2>&1 && docker buildx version >/dev/null 2>&1; then
BUILD_CMD="docker buildx build"
else
BUILD_CMD="docker build"
fi
# Optional environment variables:
# PLATFORM=linux/amd64,linux/arm64 # for buildx
# PUSH=1 # buildx --push
PLATFORM_ARGS=""
if [ "${PLATFORM-}" != "" ]; then
PLATFORM_ARGS="--platform ${PLATFORM}"
fi
PUSH_ARGS=""
if [ "${PUSH-}" != "" ]; then
PUSH_ARGS="--push"
fi
# Run the actual build
# shellcheck disable=SC2086
${BUILD_CMD} \
${PLATFORM_ARGS} \
-f Dockerfile.server \
--build-arg VERSION="${VERSION}" \
-t "${IMAGE_NAME}:${VERSION}" \
-t "${IMAGE_NAME}:latest" \
${PUSH_ARGS} \
.