Compare commits

47 Commits

Author SHA1 Message Date
JinU Choi
8e2f1e68cb Merge pull request #21 from dalbodeule/develop
[release] 1.0.0
2025-12-11 19:40:08 +09:00
JinU Choi
983332b3d8 Merge pull request #20 from dalbodeule/feature/grpc-tunneling
[feat] Switch the DTLS-based HTTP tunnel to a gRPC-based HTTP/2 tunnel
2025-12-11 19:38:50 +09:00
dalbodeule
38f05db0dc [feat](client): add local HTTP proxying for gRPC-based tunnels
- Enhanced gRPC client with logic to forward incoming tunnel streams as HTTP requests to a local target.
- Implemented per-stream state management for matching StreamOpen/StreamData/StreamClose to HTTP requests/responses.
- Added mechanisms to assemble HTTP requests, send them locally, and respond via tunnel streams.
- Introduced a configurable HTTP client with proper headers and connection settings for robust forwarding.
2025-12-11 19:05:26 +09:00
dalbodeule
a41bd34179 [feat](server, errorpages): add gRPC-based tunnel session handling and favicon support
- Implemented gRPC-based tunnel sessions for multiplexing HTTP requests via `grpcTunnelSession` with features like `recvLoop`, `send`, and per-stream state management.
- Registered and unregistered tunnels for domains, replacing DTLS-based sessions for improved scalability and maintainability.
- Integrated domain validation checks during gRPC tunnel handshake with configurable validator support.
- Modified static error pages (`400.html`, `404.html`, `502.html`, `504.html`, `500.html`, `525.html`) to include favicon linking, enhancing error page presentation.
2025-12-11 18:49:56 +09:00
dalbodeule
e388e5a272 [debug](server): add temporary debug log for gRPC routing inspection
- Added a debug log in `grpcOrHTTPHandler` to output protocol, content type, host, and path information.
2025-12-11 17:10:56 +09:00
dalbodeule
d93440f4b3 [chore](docker): remove unused DTLS-related UDP port mapping
- Removed `443/udp` from `EXPOSE` in `Dockerfile.server`.
- Removed UDP port mapping for `443` in `docker-compose.yml`.
2025-12-11 17:00:36 +09:00
dalbodeule
1492a1a82c [feat](protocol): update go_package path and regen related Protobuf types
- Changed `go_package` option in `hopgate_stream.proto` to `internal/protocol/pb;pb`.
- Regenerated `hopgate_stream.pb.go` with updated package path to align with new structure.
- Added `protocol.md` documenting the gRPC-based HTTP tunneling protocol.
2025-12-11 17:00:12 +09:00
dalbodeule
64f730d2df [feat](protocol, client, server): replace DTLS with gRPC for tunnel implementation
- Introduced gRPC-based tunnel design for bi-directional communication, replacing legacy DTLS transport.
- Added `HopGateTunnel` gRPC service with client and server logic for `OpenTunnel` stream handling.
- Updated client to use gRPC tunnel exclusively, including experimental entry point for stream-based HTTP proxying.
- Removed DTLS-specific client, server, and related dependencies (`pion/dtls`).
- Adjusted `cmd/server` to route gRPC and HTTP/HTTPS traffic dynamically on shared ports.
2025-12-11 16:48:17 +09:00
dalbodeule
17839def69 [feat](docs): update ARCHITECTURE.md to reflect gRPC-based tunnel design
- Replaced legacy DTLS transport details with gRPC/HTTP2 tunnel architecture.
- Updated server and client roles to describe gRPC bi-directional stream-based request/response handling.
- Revised internal component descriptions and flow diagrams to align with gRPC-based implementation.
- Marked DTLS sections as deprecated and documented planned removal in future versions.
2025-12-11 16:07:15 +09:00
dalbodeule
faea425e57 [feat](client, server): enable concurrent HTTP stream handling per DTLS session
- Removed session-level serialization lock for HTTP requests (`requestMu`) to support concurrent stream processing.
- Introduced centralized `readLoop` and `streamReceiver` design for stream demultiplexing and individual stream handling.
- Updated client to handle multiple `StreamOpen/StreamData/StreamClose` per session concurrently with per-stream ARQ state.
- Enhanced server logic for efficient HTTP mapping and response streaming in a concurrent stream environment.
2025-12-10 13:05:12 +09:00
JinU Choi
9b7369233c Merge pull request #19 from dalbodeule/copilot/fix-dtls-buffer-error
Fix DTLS buffer size, concurrent request handling, and client frame robustness
2025-12-10 01:26:19 +09:00
dalbodeule
661f8b6413 [feat](server): serialize HTTP requests per DTLS session with session-level mutex
- Added `requestMu` mutex in `dtlsSessionWrapper` to serialize HTTP request handling per DTLS session.
- Prevents interleaved HTTP request streams on clients that process one stream at a time.
- Updated `ForwardHTTP` logic to lock and unlock around HTTP request handling for safe serialization.
- Documented behavior and rationale in `progress.md` for future multiplexing enhancements.
2025-12-10 01:25:56 +09:00
dalbodeule
05dfff21f6 [feat](server, protocol): add sender and receiver ARQ for reliable HTTP stream delivery
- Implemented application-level ARQ with selective retransmission for server-to-client streams, leveraging `StreamAck` logic.
- Added sender-side ARQ state in `streamSender` for tracking and resending unacknowledged frames.
- Introduced receiver-side ARQ with `AckSeq` and `LostSeqs` for handling out-of-order and lost frames.
- Enhanced `dtlsSessionWrapper` to support ARQ management and seamless stream-based DTLS tunneling.
2025-12-10 01:12:58 +09:00
copilot-swe-agent[bot]
446a265fa2 Improve code readability in client stream frame handlers
- Extract stream IDs before logging calls for better readability
- Remove unnecessary anonymous functions
- Address code review feedback

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:46:00 +00:00
copilot-swe-agent[bot]
56916c75f4 Fix client handling of unexpected stream frames at top level
- Add handling for StreamData and StreamClose at top level in client StartLoop
- Log warning and continue instead of crashing on unexpected frames
- Fixes "unsupported envelope type stream_close" error when frames arrive out of order

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:44:31 +00:00
copilot-swe-agent[bot]
887c5fcdff Address code review feedback: cleanup unused code and improve channel safety
- Remove unused streamResponse struct
- Add named constant for response channel buffer size
- Add default case to prevent blocking in readLoop select
- Fix potential double-close of channels during cleanup
- Improve comments explaining concurrency patterns

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:24:56 +00:00
copilot-swe-agent[bot]
ff38ef2828 Fix concurrent request handling with stream multiplexing
- Add channel-based multiplexing to handle concurrent HTTP requests
- Implement background readLoop to dispatch responses to correct streams
- Remove mutex bottleneck that was serializing all requests
- Fixes "unexpected stream_data/stream_open for id" errors with concurrent requests

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:22:29 +00:00
copilot-swe-agent[bot]
1292df33e5 Fix DTLS buffer size issue by wrapping sessions with buffered readers
- Add dtlsReadBufferSize constant (8KB) matching pion/dtls limits
- Wrap DTLS sessions with bufio.Reader in client and server code
- Update tests to use buffered readers for datagram-based connections
- All tests passing successfully

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 14:07:15 +00:00
copilot-swe-agent[bot]
412b59f420 Initial plan 2025-12-09 13:59:02 +00:00
dalbodeule
1847a264cb [fix](protocol): improve Protobuf decoding with precise payload reading and clarification
- Refactored `Decode` to use `io.ReadFull` for accurate length-prefix and payload reading.
- Simplified logic to avoid mismatched length issues and clarified comments for maintainability.
2025-12-09 20:11:21 +09:00
JinU Choi
d4d6615c0e Merge pull request #18 from dalbodeule/copilot/fix-protobuf-length-prefix-framing
Fix DTLS protobuf codec for UDP datagram boundaries
2025-12-09 20:03:32 +09:00
copilot-swe-agent[bot]
a00c001b49 Improve test documentation for mock datagram connection
Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 10:51:44 +00:00
copilot-swe-agent[bot]
76423627e9 Fix DTLS protobuf codec framing for datagram boundaries
- Modified protobufCodec.Encode() to combine length prefix and protobuf data into a single buffer and write in one call
- Modified protobufCodec.Decode() to read entire datagram in a single Read call
- Added comprehensive tests for datagram-based codec behavior
- Fixes issue #17: proto: cannot parse invalid wire-format data error in DTLS

Co-authored-by: dalbodeule <11470513+dalbodeule@users.noreply.github.com>
2025-12-09 10:49:37 +00:00
copilot-swe-agent[bot]
9a70256d89 Initial plan 2025-12-09 10:44:22 +00:00
dalbodeule
852a22b8d8 [refactor](build): migrate build_server_image.sh to POSIX sh and improve build options
- Rewrote the script for POSIX compliance (`bash` to `sh`).
- Enhanced environment variable handling for optional arguments (`PLATFORM`, `PUSH`).
- Improved readability and added detailed inline comments for maintainability.
2025-12-09 18:45:22 +09:00
dalbodeule
c295d8c20d build_server_image.sh add +x 2025-12-09 18:41:45 +09:00
dalbodeule
1336c540d0 [feat](build): add versioned Docker image build script and version injection
- Introduced `tools/build_server_image.sh` for building versioned server images with support for multi-arch builds.
- Added `VERSION` injection via `-ldflags` in Dockerfile and Go binaries for both server and client.
- Updated workflows and Makefile to ensure consistent version tagging during builds.
2025-12-09 18:41:00 +09:00
dalbodeule
3402616c3e [feat](protocol): regenerate Protobuf Go types from updated hopgate_stream.proto
- Generated `hopgate_stream.pb.go` based on the latest schema for DTLS stream tunneling.
- Added new Protobuf message types, including `Request`, `Response`, `StreamOpen`, `StreamData`, `StreamAck`, `StreamClose`, and `Envelope`.
2025-12-09 18:14:33 +09:00
dalbodeule
715cf6b636 [fix](protocol): improve Protobuf codec buffering for DTLS compatibility
- Updated `Decode` to wrap `io.Reader` in a sufficiently large `bufio.Reader` when handling DTLS sessions, preventing "buffer is too small" errors.
- Enhanced length-prefix reading logic to ensure safe handling of Protobuf envelopes during DTLS stream processing.
- Clarified comments and fixed minor formatting inconsistencies in Protobuf codec documentation.
2025-12-09 17:23:02 +09:00
dalbodeule
dfc266f61a [feat](server, client): add runtime validation for critical environment variables
- Introduced `getEnvOrPanic` helper to enforce non-empty required environment variables.
- Added strict validation for server (`HOP_SERVER_*`) and client (`HOP_CLIENT_*`) configurations at startup.
- Updated `.env` loader to prioritize OS env vars over `.env` file values.
- Enhanced structured logging for validated environment variables.
- Improved Makefile with `check-env-server` and `check-env-client` targets for build-time validation.
2025-12-09 00:54:42 +09:00
JinU Choi
ab2bc38e32 Merge pull request #16 from dalbodeule/feature/udp-stream
[enhancement] udp stream and protobuf apply
2025-12-09 00:51:36 +09:00
dalbodeule
5c3be0a3bb [feat](client): implement application-level ARQ with selective retransmission
- Added `StreamAck`-based selective retransmission logic for reliable stream delivery.
- Introduced per-stream ARQ states (`expectedSeq`, `lost`, `received`) for out-of-order handling and lost frame tracking.
- Implemented mechanisms to send `StreamAck` with `AckSeq` and `LostSeqs` attributes in response to `StreamData`.
- Enhanced retransmission logic for unacknowledged frames in `streamSender`, ensuring robust recovery for lost data.
- Updated progress notes in `progress.md` to reflect ARQ implementation.
2025-12-09 00:15:03 +09:00
dalbodeule
5e94dd7aa9 [feat](server, client): implement streaming-based HTTP tunnel with DTLS sessions
- Replaced single-envelope HTTP handling with stream-based tunneling (`StreamOpen`, `StreamData`, and `StreamClose`) for HTTP-over-DTLS.
- Added unique StreamID generation for per-session HTTP requests.
- Improved client and server logic for handling chunked body transmissions and reverse stream responses.
- Enhanced pseudo-header handling for HTTP metadata in tunneling.
- Updated error handling for local HTTP failures, ensuring proper stream-based responses.
2025-12-08 23:05:45 +09:00
dalbodeule
798ad75e39 [feat](protocol): enforce 4KiB hard limit on Protobuf body and stream payloads
- Added safeguards to restrict HTTP body and stream payload sizes to 4KiB (`StreamChunkSize`) in the Protobuf codec.
- Updated client logic to apply consistent limits for streaming and non-streaming scenarios.
- Improved error handling with clear messages for oversized payloads.
2025-12-08 22:38:34 +09:00
JinU Choi
65279323ed Merge pull request #15 from dalbodeule/feature/missing-env
[enhancement] Env enhancement.
2025-12-08 22:26:28 +09:00
dalbodeule
c5b3c11df0 [refactor](build, Makefile): drop godotenv dependency and fix Korean grammar in env checks
- Removed `godotenv` dependency from `go.mod` as it's no longer used.
- Corrected Korean grammar in Makefile environment variable validation messages.
2025-12-08 22:26:08 +09:00
dalbodeule
c81e2c4a81 [docs](README.md): update transport and tunneling details for Protobuf-based messaging
- Updated description of server-client transport to use Protobuf-based, length-prefixed envelopes.
- Revised notes on handling large HTTP bodies and outlined plans for stream/frame-based tunneling.
- Updated `progress.md` with finalized implementation of MTU-safe chunk size constant.
2025-12-08 21:30:45 +09:00
dalbodeule
eac39550e2 [feat](protocol): extend Protobuf codec with stream-based message support
- Added support for `StreamOpen`, `StreamData`, `StreamClose`, and `StreamAck` types in the Protobuf codec.
- Defined new pseudo-header constants for HTTP-over-stream tunneling.
- Introduced `StreamChunkSize` constant for MTU-safe payload sizes (4 KiB).
- Updated encoding and decoding logic to handle stream-based types seamlessly.
2025-12-08 21:25:26 +09:00
dalbodeule
99be2d2e31 [feat](protocol): implement Protobuf codec and integrate into default WireCodec
- Introduced `protobufCodec` supporting length-prefixed Protobuf serialization/deserialization.
- Replaced JSON-based `DefaultCodec` with Protobuf-based implementation.
- Updated generated Protobuf Go types, aligning with `go_package` updates in `hopgate_stream.proto`.
- Added constants and safeguards for Protobuf envelope size limits.
- Modified `Makefile` to accommodate updated Protobuf generation logic.
2025-12-08 20:47:12 +09:00
dalbodeule
1fa5e900f8 [feat](protocol): add Protobuf schemas and code generation for hopgate streams
- Defined `hopgate_stream.proto` with message definitions for stream-based DTLS tunneling, including `Request`, `Response`, `StreamOpen`, `StreamData`, `StreamAck`, and `StreamClose`.
- Added `Envelope` container for top-level message encapsulation.
- Integrated Protobuf code generation into the `Makefile` using `protoc` with `protoc-gen-go`.
- Generated Go types under `internal/protocol/pb`.
2025-12-08 20:30:53 +09:00
dalbodeule
bf5c3c8f59 [feat](protocol): replace JSON handlers with codec abstraction
- Introduced `WireCodec` interface in `internal/protocol/codec.go` to abstract serialization/deserialization logic.
- Updated server and client to use `DefaultCodec`, replacing direct JSON encoding/decoding.
- Eliminated `bufio.Reader` from session handling, as `DefaultCodec` manages buffering for DTLS sessions.
- Marked related protocol tasks in `progress.md` as complete.
2025-12-08 20:14:36 +09:00
dalbodeule
34bf0eed98 [feat](protocol): redesign application protocol with stream-based DTLS tunneling
- Replaced single-envelope JSON model with a stream/frame-based protocol using `StreamOpen`, `StreamData`, and `StreamClose` for chunked transmission.
- Added application-level ARQ with selective retransmission (`StreamAck`) for reliability over DTLS/UDP.
- Defined MTU-safe chunk sizes and sequence-based flow control to handle large HTTP bodies effectively.
- Updated `internal/protocol` for structured stream message handling, including ACK/NACK support.
- Documented potential transition to binary serialization for performance optimization.
2025-12-08 00:50:13 +09:00
dalbodeule
302acb640d [docs](README): add detailed documentation for .env and environment variable handling
- Documented the custom `.env` loader behavior, prioritization of OS-level environment variables, and validation stages.
- Explained server and client-specific configuration loading process.
- Added best practices for environment variable usage in development and production environments.
2025-12-08 00:41:58 +09:00
dalbodeule
00b47fda8e [refactor](server, client, config): remove godotenv dependency and enhance env var handling
- Replaced `godotenv` with a custom `.env` loader that respects OS-level environment variables.
- Updated server and client initialization to prioritize OS environment variables over `.env` values.
- Improved environment variable validation and logging with structured logs.
- Applied cleaner error handling and removed redundant `log` package usage.
2025-12-08 00:34:34 +09:00
dalbodeule
01cd524abe [feat](server, client, build): integrate dotenv for environment variable management (by @ryu31847)
- Added `github.com/joho/godotenv` for loading `.env` files in server and client.
- Implemented environment variable validation and logging in both main programs.
- Updated Makefile with `.env` export and validation steps for required variables.
- Simplified error handling in `writeErrorPage` rendering logic.
2025-12-08 00:13:30 +09:00
dalbodeule
d9ac388761 [feat](server): add 502 Bad Gateway support and improve error page handling
- Introduced handling for `502 Bad Gateway` errors with a dedicated HTML template.
- Updated `writeErrorPage` logic to include 502 and other new status mappings for custom templates.
- Improved error page rendering by mapping 4xx/5xx status codes to appropriate templates.
2025-12-03 01:38:11 +09:00
dalbodeule
c6b3632784 [feat](protocol): introduce stream-based DTLS tunneling and body size handling
- Designed a stream/frame-based protocol leveraging `StreamOpen`, `StreamData`, and `StreamClose` fields for chunked transmission.
- Addressed DTLS/UDP MTU limits by capping tunneled body sizes to 48 KiB and replacing oversized responses with `502 Bad Gateway`.
- Updated `internal/protocol` to enable safe handling of large HTTP bodies via streaming.
- Documented future work on replacing JSON with binary encoding for improved performance.
2025-12-03 01:34:34 +09:00
32 changed files with 5281 additions and 684 deletions

View File

@@ -57,3 +57,5 @@ jobs:
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
VERSION=${{ github.sha }}

View File

@@ -13,8 +13,13 @@ This document describes the overall architecture of the HopGate system. (en)
- 서버는 80/443 포트를 점유하고, ACME(Let's Encrypt 등)로 TLS 인증서를 자동 발급/갱신합니다. (ko)
- The server listens on ports 80/443 and automatically issues/renews TLS certificates using ACME (e.g. Let's Encrypt). (en)
- 클라이언트는 DTLS를 통해 서버에 연결되고, 서버가 전달한 HTTP 요청을 로컬 서비스(127.0.0.1:PORT)에 대신 보내고 응답을 다시 서버로 전달합니다. (ko)
- Clients connect to the server via DTLS, forward HTTP requests to local services (127.0.0.1:PORT), and send the responses back to the server. (en)
- 전송 계층은 **TCP + TLS(HTTPS) + HTTP/2 + gRPC** 기반의 터널을 사용해 서버–클라이언트 간 HTTP 요청/응답을 멀티플렉싱합니다. (ko)
- The transport layer uses a **TCP + TLS (HTTPS) + HTTP/2 + gRPC**-based tunnel to multiplex HTTP requests/responses between server and clients. (en)
- 클라이언트는 장기 유지 gRPC bi-directional stream 을 통해 서버와 터널을 형성하고,
서버가 전달한 HTTP 요청을 로컬 서비스(127.0.0.1:PORT)에 대신 보내고 응답을 다시 서버로 전달합니다. (ko)
- Clients establish long-lived gRPC bi-directional streams as tunnels to the server,
forward HTTP requests to local services (127.0.0.1:PORT), and send responses back to the server. (en)
- 관리 Plane(REST API)을 통해 도메인 등록/해제 및 클라이언트 API Key 발급을 수행합니다. (ko)
- An admin plane (REST API) is used to register/unregister domains and issue client API keys. (en)
@@ -31,8 +36,7 @@ This document describes the overall architecture of the HopGate system. (en)
├── internal/
│ ├── config/ # shared configuration loader
│ ├── acme/ # ACME certificate management
│ ├── dtls/ # DTLS abstraction & implementation
│ ├── proxy/ # HTTP proxy / tunneling core
│ ├── proxy/ # HTTP proxy / tunneling core (gRPC tunnel)
│ ├── protocol/ # server-client message protocol
│ ├── admin/ # admin plane HTTP handlers
│ └── logging/ # structured logging utilities
@@ -46,11 +50,11 @@ This document describes the overall architecture of the HopGate system. (en)
### 2.1 `cmd/`
- [`cmd/server/main.go`](cmd/server/main.go) — 서버 실행 엔트리 포인트. 서버 설정 로딩, ACME/TLS 초기화, HTTP/HTTPS/DTLS 리스너 시작을 담당합니다. (ko)
- [`cmd/server/main.go`](cmd/server/main.go) — Server entrypoint. Loads configuration, initializes ACME/TLS, and starts HTTP/HTTPS/DTLS listeners. (en)
- [`cmd/server/main.go`](cmd/server/main.go) — 서버 실행 엔트리 포인트. 서버 설정 로딩, ACME/TLS 초기화, HTTP/HTTPS 리스너 및 gRPC 터널 엔드포인트 시작을 담당합니다. (ko)
- [`cmd/server/main.go`](cmd/server/main.go) — Server entrypoint. Loads configuration, initializes ACME/TLS, and starts HTTP/HTTPS listeners plus the gRPC tunnel endpoint. (en)
- [`cmd/client/main.go`](cmd/client/main.go) — 클라이언트 실행 엔트리 포인트. 설정 로딩, DTLS 연결 및 핸드셰이크, 로컬 서비스 프록시 루프를 담당합니다. (ko)
- [`cmd/client/main.go`](cmd/client/main.go) — Client entrypoint. Loads configuration, performs DTLS connection and handshake, and runs the local proxy loop. (en)
- [`cmd/client/main.go`](cmd/client/main.go) — 클라이언트 실행 엔트리 포인트. 설정 로딩, gRPC/HTTP2 터널 연결, 로컬 서비스 프록시 루프를 담당합니다. (ko)
- [`cmd/client/main.go`](cmd/client/main.go) — Client entrypoint. Loads configuration, establishes a gRPC/HTTP2 tunnel to the server, and runs the local proxy loop. (en)
---
@@ -77,32 +81,24 @@ This document describes the overall architecture of the HopGate system. (en)
- Issue/renew TLS certificates for main and proxy domains. (en)
- HTTP-01 / TLS-ALPN-01 챌린지 처리 훅 제공. (ko)
- Provide hooks for HTTP-01 / TLS-ALPN-01 challenges. (en)
- HTTPS/DTLS 리스너에 사용할 `*tls.Config` 제공. (ko)
- Provide `*tls.Config` for HTTPS/DTLS listeners. (en)
- HTTPS 및 gRPC 터널 리스너에 사용할 `*tls.Config` 제공. (ko)
- Provide `*tls.Config` for HTTPS and gRPC tunnel listeners. (en)
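The sketch below illustrates how a single ACME-managed `*tls.Config` could back both listeners. It is illustrative only: the function name `serveHTTPSAndTunnel` is hypothetical, tunnel service registration is elided, and unlike this sketch the actual server routes gRPC and HTTP/HTTPS traffic dynamically on shared ports.

```go
package server

import (
	"crypto/tls"
	"net"
	"net/http"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// serveHTTPSAndTunnel is a sketch: the ACME-managed *tls.Config is shared by
// the HTTPS listener and the gRPC tunnel server. For brevity the two listeners
// use separate addresses and the tunnel service registration is omitted.
func serveHTTPSAndTunnel(tlsCfg *tls.Config, httpsAddr, tunnelAddr string, mux http.Handler) error {
	// HTTPS listener backed by the shared TLS config.
	httpsLn, err := tls.Listen("tcp", httpsAddr, tlsCfg)
	if err != nil {
		return err
	}
	go func() { _ = http.Serve(httpsLn, mux) }()

	// gRPC tunnel listener using the same TLS material.
	grpcSrv := grpc.NewServer(grpc.Creds(credentials.NewTLS(tlsCfg)))
	// protocolpb.RegisterHopGateTunnelServer(grpcSrv, tunnelSrv) // hypothetical wiring
	tunnelLn, err := net.Listen("tcp", tunnelAddr)
	if err != nil {
		return err
	}
	return grpcSrv.Serve(tunnelLn)
}
```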
---
### 2.4 `internal/dtls`
### 2.4 (Reserved for legacy DTLS prototype)
- DTLS 통신을 추상화하고, pion/dtls 기반 구현 및 핸드셰이크 로직을 포함합니다. (ko)
- Abstracts DTLS communication and includes a pion/dtls-based implementation plus handshake logic. (en)
- 주요 요소 / Main elements: (ko/en)
- `Session`, `Server`, `Client` 인터페이스 — DTLS 위의 스트림과 서버/클라이언트를 추상화. (ko)
- `Session`, `Server`, `Client` interfaces — abstract streams and server/client roles over DTLS. (en)
- `NewPionServer`, `NewPionClient` — pion/dtls 를 사용하는 실제 구현. (ko)
- `NewPionServer`, `NewPionClient` — concrete implementations using pion/dtls. (en)
- `PerformServerHandshake`, `PerformClientHandshake` — 도메인 + 클라이언트 API Key 기반 애플리케이션 레벨 핸드셰이크. (ko)
- `PerformServerHandshake`, `PerformClientHandshake` — application-level handshake based on domain + client API key. (en)
- `NewSelfSignedLocalhostConfig` — 디버그용 localhost self-signed TLS 설정을 생성. (ko)
- `NewSelfSignedLocalhostConfig` — generates a debug-only localhost self-signed TLS config. (en)
> 초기 버전에서 DTLS 기반 터널을 실험했으나, 현재 설계에서는 **gRPC/HTTP2 터널만** 사용합니다.
> DTLS 관련 코드는 점진적으로 제거하거나, 별도 브랜치/히스토리에서만 보존할 예정입니다. (ko)
> Early iterations experimented with a DTLS-based tunnel, but the current design uses **gRPC/HTTP2 tunnels only**.
> Any DTLS-related code is planned to be removed or kept only in historical branches. (en)
---
### 2.5 `internal/protocol`
- 서버와 클라이언트가 DTLS 위에서 주고받는 HTTP 요청/응답 메시지 포맷을 정의합니다. (ko)
- Defines HTTP request/response message formats exchanged over DTLS between server and clients. (en)
- 서버와 클라이언트가 **gRPC/HTTP2 터널 전송 계층** 위에서 주고받는 HTTP 요청/응답 및 스트림 메시지 포맷을 정의합니다. (ko)
- Defines HTTP request/response and stream message formats exchanged over the gRPC/HTTP2 tunnel transport layer. (en)
- 요청 메시지 / Request message: (ko/en)
- `RequestID`, `ClientID`, `ServiceName`, `Method`, `URL`, `Header`, `Body`. (ko/en)
@@ -110,8 +106,15 @@ This document describes the overall architecture of the HopGate system. (en)
- 응답 메시지 / Response message: (ko/en)
- `RequestID`, `Status`, `Header`, `Body`, `Error`. (ko/en)
- 인코딩은 초기에는 JSON을 사용하고, 필요 시 MsgPack/Protobuf 등으로 확장 가능합니다. (ko)
- Encoding starts with JSON and may be extended to MsgPack/Protobuf later. (en)
- 스트림 기반 터널링을 위한 Envelope/Stream 타입: (ko/en)
- [`Envelope`](internal/protocol/protocol.go:64) — 상위 메시지 컨테이너. (ko/en)
- [`StreamOpen`](internal/protocol/protocol.go:94) — 새로운 스트림 오픈 및 헤더/메타데이터 전달. (ko/en)
- [`StreamData`](internal/protocol/protocol.go:104) — 시퀀스 번호(Seq)를 가진 바디 chunk 프레임. (ko/en)
- [`StreamClose`](internal/protocol/protocol.go:143) — 스트림 종료 및 에러 정보 전달. (ko/en)
- [`StreamAck`](internal/protocol/protocol.go:117) — 선택적 재전송(Selective Retransmission)을 위한 ACK/NACK 힌트. (ko/en)
- 이 구조는 Protobuf 기반 length-prefix 프레이밍을 사용하며, gRPC bi-di stream 의 메시지 타입으로 매핑됩니다. (ko)
- This structure uses protobuf-based length-prefixed framing and is mapped onto messages in a gRPC bi-di stream. (en)
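The following is a minimal sketch of that length-prefixed framing, assuming a 4-byte big-endian prefix and the generated `protocolpb.Envelope` type; the real codec's prefix width and error handling may differ, and over the gRPC tunnel the stream itself provides message framing.

```go
package protocol

import (
	"encoding/binary"
	"fmt"
	"io"

	"google.golang.org/protobuf/proto"

	protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
)

// encodeEnvelope writes the length prefix and protobuf payload in a single
// Write call, which keeps datagram-style transports happy.
func encodeEnvelope(w io.Writer, env *protocolpb.Envelope) error {
	payload, err := proto.Marshal(env)
	if err != nil {
		return fmt.Errorf("marshal envelope: %w", err)
	}
	buf := make([]byte, 4+len(payload))
	binary.BigEndian.PutUint32(buf[:4], uint32(len(payload)))
	copy(buf[4:], payload)
	_, err = w.Write(buf)
	return err
}

// decodeEnvelope reads the prefix and exactly that many payload bytes with
// io.ReadFull, then unmarshals the Envelope.
func decodeEnvelope(r io.Reader) (*protocolpb.Envelope, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, err
	}
	payload := make([]byte, binary.BigEndian.Uint32(lenBuf[:]))
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	env := &protocolpb.Envelope{}
	if err := proto.Unmarshal(payload, env); err != nil {
		return nil, err
	}
	return env, nil
}
```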
---
@@ -128,22 +131,21 @@ This document describes the overall architecture of the HopGate system. (en)
- 도메인/패스 규칙에 따라 적절한 클라이언트와 서비스로 매핑합니다. (ko)
- Map requests to appropriate clients and services based on domain/path rules. (en)
- 요청을 `protocol.Request` 로 직렬화하여 DTLS 세션을 통해 클라이언트로 전송합니다. (ko)
- Serialize the request as `protocol.Request` and send it over a DTLS session to the client. (en)
- 클라이언트로부터 받은 `protocol.Response` 를 HTTP 응답으로 복원하여 외부 사용자에게 반환합니다. (ko)
- Deserialize `protocol.Response` from the client and return it as an HTTP response to the external user. (en)
- 요청/응답을 `internal/protocol` 의 스트림 메시지(`StreamOpen` / `StreamData` / `StreamClose` 등)로 직렬화하여
서버–클라이언트 간 gRPC bi-di stream 위에서 주고받습니다. (ko)
- Serialize requests/responses into stream messages from `internal/protocol` (`StreamOpen` / `StreamData` / `StreamClose`, etc.)
and exchange them between server and clients over a gRPC bi-di stream. (en)
#### 클라이언트 측 역할 / Client-side role
- DTLS 채널을 통해 서버가 내려보낸 `protocol.Request` 를 수신합니다. (ko)
- Receive `protocol.Request` objects sent by the server over DTLS. (en)
- 서버가 gRPC 터널을 통해 내려보낸 스트림 메시지를 수신합니다. (ko)
- Receive stream messages sent by the server over the gRPC tunnel. (en)
- 로컬 HTTP 서비스(예: 127.0.0.1:8080)에 요청을 전달하고 응답을 수신합니다. (ko)
- Forward these requests to local HTTP services (e.g. 127.0.0.1:8080) and collect responses. (en)
- 응답을 `protocol.Response` 로 직렬화하여 DTLS 채널을 통해 서버로 전송합니다. (ko)
- Serialize responses as `protocol.Response` and send them back to the server over DTLS. (en)
- 응답을 동일한 gRPC bi-di stream 상의 역방향 스트림 메시지로 직렬화하여 서버로 전송합니다. (ko)
- Serialize responses as reverse-direction stream messages on the same gRPC bi-di stream and send them back to the server. (en)
---
@@ -190,17 +192,21 @@ The HTTPS listener on the HopGate server receives the request. (en)
3. `proxy` 레이어가 도메인과 경로를 기반으로 이 요청을 처리할 클라이언트(예: client-1)와 해당 로컬 서비스(`service-a`)를 결정합니다. (ko)
The `proxy` layer decides which client (e.g., client-1) and which local service (`service-a`) should handle the request, based on domain and path. (en)
4. 서버는 요청을 `protocol.Request` 구조로 직렬화하고, `dtls.Session` 을 통해 선택된 클라이언트로 전송합니다. (ko)
The server serializes the request into a `protocol.Request` and sends it to the selected client over a `dtls.Session`. (en)
4. 서버는 요청을 `internal/protocol` 의 스트림 메시지(예: `StreamOpen` + 여러 `StreamData` + `StreamClose`)로 직렬화하고,
선택된 클라이언트와 맺은 gRPC bi-di stream 을 통해 전송합니다. (ko)
The server serializes the request into stream messages from `internal/protocol` (e.g., `StreamOpen` + multiple `StreamData` + `StreamClose`)
and sends them over a gRPC bi-di stream to the selected client. (en)
5. 클라이언트의 `proxy` 레이어는 `protocol.Request` 를 수신하고, 로컬 서비스(예: 127.0.0.1:8080)에 HTTP 요청을 수행합니다. (ko)
The client's `proxy` layer receives the `protocol.Request` and performs an HTTP request to a local service (e.g., 127.0.0.1:8080). (en)
5. 클라이언트의 `proxy` 레이어는 이 스트림 메시지들을 수신해 로컬 서비스(예: 127.0.0.1:8080)에 HTTP 요청을 수행합니다. (ko)
The client's `proxy` layer receives these stream messages and performs an HTTP request to a local service (e.g., 127.0.0.1:8080). (en)
6. 클라이언트는 로컬 서비스로부터 HTTP 응답을 수신하고, 이를 `protocol.Response` 로 직렬화하여 DTLS 채널을 통해 서버로 다시 전송합니다. (ko)
The client receives the HTTP response from the local service, serializes it as a `protocol.Response`, and sends it back to the server over DTLS. (en)
6. 클라이언트는 로컬 서비스로부터 HTTP 응답을 수신하고, 이를 역방향 스트림 메시지(`StreamOpen` + 여러 `StreamData` + `StreamClose`)로 직렬화하여
동일한 gRPC bi-di stream 을 통해 서버로 다시 전송합니다. (ko)
The client receives the HTTP response from the local service, serializes it as reverse-direction stream messages
(`StreamOpen` + multiple `StreamData` + `StreamClose`), and sends them back to the server over the same gRPC bi-di stream. (en)
7. 서버는 `protocol.Response` 를 디코딩하여 원래의 HTTPS 요청에 대한 HTTP 응답으로 변환한 뒤, 외부 사용자에게 반환합니다. (ko)
The server decodes the `protocol.Response`, converts it back into an HTTP response, and returns it to the original external user. (en)
7. 서버는 응답 스트림 메시지를 조립해 원래의 HTTPS 요청에 대한 HTTP 응답으로 변환한 뒤, 외부 사용자에게 반환합니다. (ko)
The server reassembles the response stream messages into an HTTP response for the original HTTPS request and returns it to the external user. (en)
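A hedged sketch of step 4, assuming a `sendEnv` helper that serializes `Send` calls on the gRPC stream; the function name and package are illustrative, while the pseudo-header constants, `StreamChunkSize`, and frame types come from `internal/protocol`.

```go
package proxy

import (
	"io"
	"net/http"

	"github.com/dalbodeule/hop-gate/internal/protocol"
	protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
)

// forwardRequestAsStream sketches how an inbound *http.Request could be turned
// into StreamOpen + StreamData + StreamClose frames on the client's gRPC tunnel.
func forwardRequestAsStream(r *http.Request, streamID, serviceName string,
	sendEnv func(*protocolpb.Envelope) error) error {

	// Method/URL/Host travel as pseudo-headers alongside the real HTTP headers.
	header := map[string]*protocolpb.HeaderValues{
		protocol.HeaderKeyMethod: {Values: []string{r.Method}},
		protocol.HeaderKeyURL:    {Values: []string{r.URL.RequestURI()}},
		protocol.HeaderKeyHost:   {Values: []string{r.Host}},
	}
	for k, vs := range r.Header {
		header[k] = &protocolpb.HeaderValues{Values: append([]string(nil), vs...)}
	}
	if err := sendEnv(&protocolpb.Envelope{Payload: &protocolpb.Envelope_StreamOpen{
		StreamOpen: &protocolpb.StreamOpen{Id: streamID, ServiceName: serviceName, Header: header},
	}}); err != nil {
		return err
	}

	// The body is chunked into StreamChunkSize frames with increasing Seq numbers.
	buf := make([]byte, protocol.StreamChunkSize)
	var seq uint64
	for {
		n, err := r.Body.Read(buf)
		if n > 0 {
			data := append([]byte(nil), buf[:n]...)
			if err2 := sendEnv(&protocolpb.Envelope{Payload: &protocolpb.Envelope_StreamData{
				StreamData: &protocolpb.StreamData{Id: streamID, Seq: seq, Data: data},
			}}); err2 != nil {
				return err2
			}
			seq++
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
	}
	return sendEnv(&protocolpb.Envelope{Payload: &protocolpb.Envelope_StreamClose{
		StreamClose: &protocolpb.StreamClose{Id: streamID},
	}})
}
```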
![architecture.jpeg](images/architecture.jpeg)
@@ -217,11 +223,14 @@ The server decodes the `protocol.Response`, converts it back into an HTTP respon
- `internal/acme` 에 ACME 클라이언트(certmagic 또는 lego 등)를 연결해 TLS 인증서 발급/갱신을 구현합니다. (ko)
- Wire an ACME client (certmagic, lego, etc.) into `internal/acme` to implement TLS certificate issuance/renewal. (en)
- `internal/dtls` 에서 pion/dtls 기반 DTLS 전송 계층 및 핸드셰이크를 안정화합니다. (ko)
- Stabilize the pion/dtls-based DTLS transport and handshake logic in `internal/dtls`. (en)
- gRPC/HTTP2 기반 터널 전송 계층을 설계/구현하고, 서버/클라이언트 모두에서 장기 유지 bi-di stream 위에
HTTP 요청/응답을 멀티플렉싱하는 로직을 추가합니다. (ko)
- Design and implement a gRPC/HTTP2-based tunnel transport layer, adding logic on both server and client to multiplex HTTP requests/responses over long-lived bi-di streams. (en)
- `internal/protocol` 과 `internal/proxy` 를 통해 실제 HTTP 터널링을 구현하고, 라우팅 규칙을 구성합니다. (ko)
- Implement real HTTP tunneling and routing rules via `internal/protocol` and `internal/proxy`. (en)
- `internal/protocol` 과 `internal/proxy` 를 통해 실제 HTTP 터널링을 구현하고,
gRPC 기반 스트림 모델이 재사용할 수 있는 논리 프로토콜로 정리합니다. (ko)
- Implement real HTTP tunneling and routing rules via `internal/protocol` and `internal/proxy`,
organizing the logical protocol so that the gRPC-based stream model can reuse it. (en)
- `internal/admin` + `ent` + PostgreSQL 을 사용해 Domain 등록/해제 및 클라이언트 API Key 발급을 완성합니다. (ko)
- Complete domain registration/unregistration and client API key issuing using `internal/admin` + `ent` + PostgreSQL. (en)

View File

@@ -18,6 +18,8 @@ FROM golang:1.25-alpine AS builder
# 기본값을 지정해두면 로컬 docker build 시에도 별도 인자 없이 빌드 가능합니다.
ARG TARGETOS=linux
ARG TARGETARCH=amd64
# Git 태그/커밋 정보를 main.version 에 주입하기 위한 VERSION 인자 (기본 dev)
ARG VERSION=dev
WORKDIR /src
@@ -32,7 +34,8 @@ RUN go mod download
COPY . .
# 서버 바이너리 빌드 (멀티 아키텍처: TARGETOS/TARGETARCH 기반)
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/hop-gate-server ./cmd/server
# -ldflags 를 통해 main.version 에 VERSION 값을 주입합니다.
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags "-X main.version=${VERSION}" -o /out/hop-gate-server ./cmd/server
# ---------- Runtime stage ----------
FROM alpine:3.20
@@ -49,7 +52,7 @@ COPY --from=builder /out/hop-gate-server /app/hop-gate-server
COPY .env.example /app/.env.example
# 기본 포트 노출 (실제 포트는 .env / 설정에 따라 변경 가능)
EXPOSE 80 443/udp 443
EXPOSE 80 443
# 기본 실행 명령
ENTRYPOINT ["/app/hop-gate-server"]

View File

@@ -18,7 +18,13 @@ BIN_DIR := ./bin
SERVER_BIN := $(BIN_DIR)/hop-gate-server
CLIENT_BIN := $(BIN_DIR)/hop-gate-client
VERSION ?= $(shell git describe --tags --dirty --always 2>/dev/null || echo dev)
# VERSION 은 현재 커밋의 7글자 SHA 를 사용합니다 (예: 1a2b3c4).
# git 정보가 없으면 dev 로 fallback 합니다.
VERSION ?= $(shell git rev-parse --short=7 HEAD 2>/dev/null || echo dev)
# .env 파일 로드
include .env
export $(shell sed 's/=.*//' .env)
.PHONY: all server client clean docker-server run-server run-client errors-css
@@ -66,3 +72,34 @@ docker-server:
@echo "Building server Docker image..."
docker build -f Dockerfile.server -t hop-gate-server:$(VERSION) .
check-env-server:
@if [ -z "$$HOP_SERVER_HTTP_LISTEN" ]; then echo "필수 환경 변수 HOP_SERVER_HTTP_LISTEN이 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_SERVER_HTTPS_LISTEN" ]; then echo "필수 환경 변수 HOP_SERVER_HTTPS_LISTEN가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_SERVER_DTLS_LISTEN" ]; then echo "필수 환경 변수 HOP_SERVER_DTLS_LISTEN가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_SERVER_DOMAIN" ]; then echo "필수 환경 변수 HOP_SERVER_DOMAIN가 설정되지 않았습니다."; exit 1; fi
check-env-client:
@if [ -z "$$HOP_CLIENT_SERVER_ADDR" ]; then echo "필수 환경 변수 HOP_CLIENT_SERVER_ADDR가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_CLIENT_DOMAIN" ]; then echo "필수 환경 변수 HOP_CLIENT_DOMAIN가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_CLIENT_API_KEY" ]; then echo "필수 환경 변수 HOP_CLIENT_API_KEY가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_CLIENT_LOCAL_TARGET" ]; then echo "필수 환경 변수 HOP_CLIENT_LOCAL_TARGET가 설정되지 않았습니다."; exit 1; fi
@if [ -z "$$HOP_CLIENT_DEBUG" ]; then echo "필수 환경 변수 HOP_CLIENT_DEBUG가 설정되지 않았습니다."; exit 1; fi
# --- Protobuf code generation -------------------------------------------------
# Requires:
# - protoc (https://grpc.io/docs/protoc-installation/)
# - protoc-gen-go (go install google.golang.org/protobuf/cmd/protoc-gen-go@latest)
#
# Generates Go types under internal/protocol/pb from internal/protocol/hopgate_stream.proto.
# NOTE:
# - go_package in hopgate_stream.proto is set to:
# github.com/dalbodeule/hop-gate/internal/protocol/pb;protocolpb
# - With --go_out=. (without paths=source_relative), protoc will place the
# generated file under internal/protocol/pb according to go_package.
proto:
@echo "Generating Go code from Protobuf schemas..."
protoc \
--go_out=. \
internal/protocol/hopgate_stream.proto
@echo "Protobuf generation completed."

View File

@@ -11,13 +11,16 @@ HopGate is a gateway that provides a **DTLS-based HTTP tunnel** between a public
- 서버는 80/443 포트를 점유하고, ACME(Let's Encrypt 등)로 TLS 인증서를 자동 발급/갱신합니다.
The server listens on ports 80/443 and automatically issues/renews TLS certificates via ACME (e.g. Let's Encrypt).
- 서버–클라이언트 간 전송은 DTLS 위에서 이루어지며, HTTP 요청/응답을 메시지로 터널링합니다.
Transport between server and clients uses DTLS, tunneling HTTP request/response messages.
- 서버–클라이언트 간 전송은 DTLS 위에서 이루어지며, 현재는 HTTP 요청/응답을 **Protobuf 기반 length-prefixed Envelope** 로 터널링합니다.
Transport between server and clients uses DTLS; HTTP requests/responses are tunneled as **Protobuf-based, length-prefixed envelopes**.
- 관리 Plane(REST API)을 통해 도메인 등록/해제 및 클라이언트 API Key 발급을 수행합니다.
An admin management plane (REST API) handles domain registration/unregistration and client API key issuance.
- 로그는 JSON 구조 형태로 stdout 에 출력되며, Prometheus + Loki + Grafana 스택에 친화적으로 설계되었습니다.
Logs are JSON-structured and designed to work well with a Prometheus + Loki + Grafana stack.
> 참고: 대용량 HTTP 바디에 대해서는 DTLS/UDP MTU 한계 때문에 **단일 Envelope** 로는 한계가 있으므로, `progress.md` 의 3.3A 섹션에 정리된 것처럼 `StreamOpen` / `StreamData` / `StreamClose` 기반의 스트림/프레임 터널링으로 점진적으로 전환할 예정입니다. (ko)
> Note: For very large HTTP bodies, a single-envelope model still hits DTLS/UDP MTU limits. As outlined in section 3.3A of `progress.md`, the plan is to gradually move to a stream/frame-based tunneling model using `StreamOpen` / `StreamData` / `StreamClose`. (en)
아키텍처 세부 내용은 [`ARCHITECTURE.md`](ARCHITECTURE.md)에 정리되어 있습니다.
Detailed architecture is documented in [`ARCHITECTURE.md`](ARCHITECTURE.md).
@@ -40,8 +43,8 @@ Detailed architecture is documented in [`ARCHITECTURE.md`](ARCHITECTURE.md).
- Go 1.21+ 권장 (go.mod 상 버전보다 최신 Go 사용을 추천)
Go 1.21+ is recommended (even if go.mod specifies an older minor).
- PostgreSQL (추후 DomainValidator 실제 구현 시 필요)
PostgreSQL (only required when implementing real domain validation).
- PostgreSQL (관리 Plane + 실제 DomainValidator 에 필수)
PostgreSQL (required for the admin plane and the real DomainValidator).
Go 모듈 의존성 설치 / 정리는 다음으로 수행할 수 있습니다:
You can install/cleanup Go module deps via:
@@ -71,6 +74,49 @@ Build artifacts are created as `./bin/hop-gate-server` and `./bin/hop-gate-clien
---
### 3.3 환경변수와 .env 처리 (Environment variables and .env handling)
HopGate 는 공통 설정을 [`internal/config/config.go`](internal/config/config.go) 에서 로드하며,
**운영체제 환경변수(OS env)가 `.env` 파일보다 우선**하도록 설계되어 있습니다.
HopGate loads shared configuration from [`internal/config/config.go`](internal/config/config.go) and is designed so that **OS-level environment variables take precedence over `.env`**.
- `.env` 로더: [`loadDotEnvOnce`](internal/config/config.go)
- 현재 작업 디렉터리의 `.env` 파일을 한 번만 읽습니다.
- 이미 OS 환경변수에 설정된 키는 **덮어쓰지 않고 그대로 유지**하고, 비어 있는 키에 대해서만 `.env` 값을 주입합니다.
- `.env` 파일이 존재하지 않으면 조용히 무시합니다 (에러가 아닙니다).
The loader reads the `.env` file once, **does not override existing OS env values**, and only fills missing keys. If `.env` is missing, it is silently ignored.
- 서버 설정 로더 (Server config loader): [`LoadServerConfigFromEnv`](internal/config/config.go)
- `.env` 로더를 먼저 호출한 뒤, `HOP_SERVER_*` 환경변수에서 서버 설정을 구성합니다.
- 실제 실행 시점에는 서버 엔트리포인트 [`cmd/server/main.go`](cmd/server/main.go) 에서 필수 환경변수가 모두 설정되었는지 한 번 더 검증합니다.
It calls the `.env` loader first, then builds server config from `HOP_SERVER_*` env vars, and finally the server entrypoint [`cmd/server/main.go`](cmd/server/main.go) validates required variables.
- 클라이언트 설정 로더 (Client config loader): [`LoadClientConfigFromEnv`](internal/config/config.go)
- `.env` 로더를 동일하게 사용하며, `HOP_CLIENT_*` 환경변수에서 클라이언트 설정을 구성합니다.
- 이후 CLI 인자(예: `--server-addr`, `--domain`)가 있을 경우 env 값보다 우선 적용됩니다.
The same loader is used for `HOP_CLIENT_*` env vars, and CLI flags override env values when provided.
빌드/실행 시 필수 환경변수는 다음 두 단계에서 검증됩니다.
Required environment variables are validated in two stages:
1. **빌드 단계 (Build-time) Makefile 체크 (optional guard)**
- [`Makefile`](Makefile) 에서 `.env``include` 한 뒤, `check-env-server` / `check-env-client` 타깃으로 최소한의 필수 env 를 확인합니다.
- 예) 서버 빌드 시: `make server``errors-css``check-env-server``go build` 순으로 실행됩니다.
The [`Makefile`](Makefile) includes `.env` and uses `check-env-server` / `check-env-client` targets to guard required variables before build.
2. **실행 단계 (Runtime) 엔트리포인트에서 엄격 검증 (strict runtime validation)**
- 서버: [`cmd/server/main.go`](cmd/server/main.go)
- 헬퍼 `getEnvOrPanic(logger, key)` 를 사용해 `HOP_SERVER_HTTP_LISTEN`, `HOP_SERVER_HTTPS_LISTEN`, `HOP_SERVER_DTLS_LISTEN`, `HOP_SERVER_DOMAIN`, `HOP_SERVER_DEBUG` 가 비어 있지 않은지 확인합니다.
- 누락되었거나 공백인 경우, 구조화 에러 로그(JSON)와 함께 프로세스를 종료합니다.
- 클라이언트: [`cmd/client/main.go`](cmd/client/main.go)
- `HOP_CLIENT_SERVER_ADDR`, `HOP_CLIENT_DOMAIN`, `HOP_CLIENT_API_KEY`, `HOP_CLIENT_LOCAL_TARGET`, `HOP_CLIENT_DEBUG` 를 동일한 방식으로 검증합니다.
- 두 경우 모두 `HOP_*_DEBUG` 값은 문자열 `"true"` 또는 `"false"` 만 허용합니다.
Both server and client use a helper (`getEnvOrPanic`) to enforce non-empty required env vars at startup and log structured JSON errors on failure. The debug flags must be the strings `"true"` or `"false"`.
실제 배포 환경에서는 `.env` 보다는 시스템 환경변수(Kubernetes `env`, Docker `-e`, systemd `Environment=` 등)를 사용하는 것을 권장하며,
로컬 개발에서는 `.env.example` 을 복사한 `.env` 파일을 사용해 빠르게 설정을 구성할 수 있습니다.
For production deployments, prefer OS-level env (Kubernetes `env`, Docker `-e`, systemd `Environment=`, etc.), and use a local `.env` (copied from `.env.example`) mainly for development.
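A minimal sketch of the loader behaviour described above (read `.env` once, never override OS env, silently ignore a missing file); the real [`loadDotEnvOnce`](internal/config/config.go) may differ in parsing details, and the name `loadDotEnvSketch` here is illustrative.

```go
package config

import (
	"bufio"
	"os"
	"strings"
	"sync"
)

var dotEnvOnce sync.Once

// loadDotEnvSketch mirrors the documented behaviour: keys already present in
// the OS environment win, only missing keys are filled from .env, and a
// missing .env file is silently ignored.
func loadDotEnvSketch() {
	dotEnvOnce.Do(func() {
		f, err := os.Open(".env")
		if err != nil {
			return // no .env file: not an error
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue // skip blanks and comments
			}
			key, value, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			key = strings.TrimSpace(key)
			if _, exists := os.LookupEnv(key); exists {
				continue // OS env takes precedence over .env
			}
			_ = os.Setenv(key, strings.TrimSpace(value))
		}
	})
}
```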
## 4. DTLS 핸드셰이크 테스트 (Testing DTLS Handshake)
HopGate는 DTLS 위에서 **도메인 + 클라이언트 API Key** 기반의 애플리케이션 레벨 핸드셰이크를 수행합니다.
@@ -106,8 +152,8 @@ HOP_CLIENT_DEBUG=true
- `HOP_CLIENT_SERVER_ADDR` : DTLS 서버 주소 (예: `localhost:8443`)
DTLS server address, e.g. `localhost:8443`.
- `HOP_CLIENT_DOMAIN` / `HOP_CLIENT_API_KEY` : 관리 Plane 에서 발급받은 도메인/키 (현재는 DummyValidator 로 아무 값이나 허용)
Domain and API key issued by the admin plane (currently any values are accepted by DummyValidator).
- `HOP_CLIENT_DOMAIN` / `HOP_CLIENT_API_KEY` : 관리 Plane 에서 발급받은 도메인/키 (실제 ent + PostgreSQL 기반 DomainValidator 에 의해 검증)
Domain and API key issued by the admin plane (validated by a real ent + PostgreSQL based DomainValidator).
- `HOP_CLIENT_LOCAL_TARGET` : 실제로 HTTP 요청을 보낼 로컬 서버 주소
Local HTTP target address.
- `HOP_CLIENT_DEBUG=true` : 서버 인증서 체인 검증을 스킵(InsecureSkipVerify)하여 self-signed 인증서를 신뢰
@@ -164,8 +210,12 @@ For implementation skeleton, see [`internal/admin`](internal/admin) and [`ent/sc
- `Debug=true` 설정은 **개발/테스트 용도**입니다. self-signed 인증서 및 InsecureSkipVerify 사용은 프로덕션 환경에서 절대 사용하지 마세요.
`Debug=true` is strictly for development/testing. Do not use self-signed certs or InsecureSkipVerify in production.
- 실제 운영 시에는 ACME 기반 인증서, PostgreSQL + ent 기반 DomainValidator, Proxy 레이어 연동 등을 완성해야 합니다.
For production you must wire ACME certificates, a PostgreSQL+ent-based DomainValidator, and the proxy layer.
- 현재 버전은 ACME 기반 인증서, PostgreSQL + ent 기반 DomainValidator, Proxy 레이어가 기본적으로 연동되어 있으나,
대용량 HTTP 바디에 대해서는 JSON 단일 메시지 기반 터널링 특성상 DTLS/UDP MTU 한계에 부딪힐 수 있습니다.
스트림/프레임 기반 DTLS 터널링으로의 전환 및 하드닝 작업은 `progress.md` 에 정의된 다음 단계에 포함되어 있습니다. (ko)
The current version wires ACME certificates, a PostgreSQL+ent-based DomainValidator, and the proxy layer by default,
but for very large HTTP bodies the JSON single-message tunneling model can still hit DTLS/UDP MTU limits.
Moving to a stream/frame-based DTLS tunneling model and further hardening are tracked as next steps in `progress.md`. (en)
HopGate는 아직 초기 단계의 실험적 프로젝트입니다. API 및 동작은 언제든지 변경될 수 있습니다.
HopGate is still experimental; APIs and behavior may change at any time.

View File

@@ -1,20 +1,46 @@
package main
import (
"bytes"
"context"
"crypto/tls"
"crypto/x509"
"flag"
"fmt"
"io"
"net"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"sync"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"github.com/dalbodeule/hop-gate/internal/config"
"github.com/dalbodeule/hop-gate/internal/dtls"
"github.com/dalbodeule/hop-gate/internal/logging"
"github.com/dalbodeule/hop-gate/internal/proxy"
"github.com/dalbodeule/hop-gate/internal/protocol"
protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
)
// version 은 빌드 시 -ldflags "-X main.version=xxxxxxx" 로 덮어쓰이는 필드입니다.
// 기본값 "dev" 는 로컬 개발용입니다.
var version = "dev"
func getEnvOrPanic(logger logging.Logger, key string) string {
value, exists := os.LookupEnv(key)
if !exists || strings.TrimSpace(value) == "" {
logger.Error("missing required environment variable", logging.Fields{
"env": key,
})
os.Exit(1)
}
return value
}
// maskAPIKey 는 로그에 노출할 때 클라이언트 API Key 를 일부만 보여주기 위한 헬퍼입니다.
func maskAPIKey(key string) string {
if len(key) <= 8 {
@@ -33,18 +59,644 @@ func firstNonEmpty(values ...string) string {
return ""
}
// runGRPCTunnelClient 는 gRPC 기반 터널을 사용하는 실험적 클라이언트 진입점입니다. (ko)
// runGRPCTunnelClient is an experimental entrypoint for a gRPC-based tunnel client. (en)
func runGRPCTunnelClient(ctx context.Context, logger logging.Logger, finalCfg *config.ClientConfig) error {
// TLS 설정은 기존 DTLS 클라이언트와 동일한 정책을 사용합니다. (ko)
// TLS configuration mirrors the existing DTLS client policy. (en)
var tlsCfg *tls.Config
if finalCfg.Debug {
tlsCfg = &tls.Config{
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
}
} else {
rootCAs, err := x509.SystemCertPool()
if err != nil || rootCAs == nil {
rootCAs = x509.NewCertPool()
}
tlsCfg = &tls.Config{
RootCAs: rootCAs,
MinVersion: tls.VersionTLS12,
}
}
// finalCfg.ServerAddr 가 "host:port" 형태이므로, SNI 에는 DNS(host) 부분만 넣어야 한다.
host := finalCfg.ServerAddr
if h, _, err := net.SplitHostPort(finalCfg.ServerAddr); err == nil && strings.TrimSpace(h) != "" {
host = h
}
tlsCfg.ServerName = host
creds := credentials.NewTLS(tlsCfg)
log := logger.With(logging.Fields{
"component": "grpc_tunnel_client",
"server_addr": finalCfg.ServerAddr,
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
})
log.Info("dialing grpc tunnel", nil)
conn, err := grpc.DialContext(ctx, finalCfg.ServerAddr, grpc.WithTransportCredentials(creds), grpc.WithBlock())
if err != nil {
log.Error("failed to dial grpc tunnel server", logging.Fields{
"error": err.Error(),
})
return err
}
defer conn.Close()
client := protocolpb.NewHopGateTunnelClient(conn)
stream, err := client.OpenTunnel(ctx)
if err != nil {
log.Error("failed to open grpc tunnel stream", logging.Fields{
"error": err.Error(),
})
return err
}
log.Info("grpc tunnel stream opened", nil)
// 초기 핸드셰이크: 도메인, API 키, 로컬 타깃 정보를 StreamOpen 헤더로 전송합니다. (ko)
// Initial handshake: send domain, API key, and local target via StreamOpen headers. (en)
headers := map[string]*protocolpb.HeaderValues{
"X-HopGate-Domain": {Values: []string{finalCfg.Domain}},
"X-HopGate-API-Key": {Values: []string{finalCfg.ClientAPIKey}},
"X-HopGate-Local-Target": {Values: []string{finalCfg.LocalTarget}},
}
open := &protocolpb.StreamOpen{
Id: "control-0",
ServiceName: "control",
TargetAddr: "",
Header: headers,
}
env := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: open,
},
}
if err := stream.Send(env); err != nil {
log.Error("failed to send initial stream_open handshake", logging.Fields{
"error": err.Error(),
})
return err
}
log.Info("sent initial stream_open handshake on grpc tunnel", logging.Fields{
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
"api_key_mask": maskAPIKey(finalCfg.ClientAPIKey),
})
// 로컬 HTTP 프록시용 HTTP 클라이언트 구성. (ko)
// HTTP client used to forward requests to the local target. (en)
httpClient := &http.Client{
Timeout: 30 * time.Second,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 10 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
ForceAttemptHTTP2: true,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
},
}
// 서버→클라이언트 방향 StreamOpen/StreamData/StreamClose 를
// HTTP 요청 단위로 모으기 위한 per-stream 상태 테이블입니다. (ko)
// Per-stream state table to assemble HTTP requests from StreamOpen/Data/Close. (en)
type inboundStream struct {
open *protocolpb.StreamOpen
body bytes.Buffer
}
streams := make(map[string]*inboundStream)
var streamsMu sync.Mutex
// gRPC 스트림에 대한 Send 는 동시 호출이 안전하지 않으므로, sendMu 로 직렬화합니다. (ko)
// gRPC streaming Send is not safe for concurrent calls; protect with a mutex. (en)
var sendMu sync.Mutex
sendEnv := func(e *protocolpb.Envelope) error {
sendMu.Lock()
defer sendMu.Unlock()
return stream.Send(e)
}
// 서버에서 전달된 StreamOpen/StreamData/StreamClose 를 로컬 HTTP 요청으로 변환하고,
// 응답을 StreamOpen/StreamData/StreamClose 로 다시 서버에 전송하는 헬퍼입니다. (ko)
// handleStream forwards a single logical HTTP request to the local target
// and sends the response back as StreamOpen/StreamData/StreamClose frames. (en)
handleStream := func(so *protocolpb.StreamOpen, body []byte) {
go func() {
streamID := strings.TrimSpace(so.Id)
if streamID == "" {
log.Error("inbound stream has empty id", logging.Fields{})
return
}
if finalCfg.LocalTarget == "" {
log.Error("local target is empty; cannot forward request", logging.Fields{
"stream_id": streamID,
})
return
}
// Pseudo-headers 에서 메서드/URL/Host 추출. (ko)
// Extract method/URL/host from pseudo-headers. (en)
method := http.MethodGet
if hv, ok := so.Header[protocol.HeaderKeyMethod]; ok && hv != nil && len(hv.Values) > 0 && strings.TrimSpace(hv.Values[0]) != "" {
method = hv.Values[0]
}
urlStr := "/"
if hv, ok := so.Header[protocol.HeaderKeyURL]; ok && hv != nil && len(hv.Values) > 0 && strings.TrimSpace(hv.Values[0]) != "" {
urlStr = hv.Values[0]
}
u, err := url.Parse(urlStr)
if err != nil {
errMsg := fmt.Sprintf("parse url from stream_open: %v", err)
log.Error("failed to parse url from stream_open", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
log.Error("failed to send error stream_open from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
log.Error("failed to send error stream_data from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
log.Error("failed to send error stream_close from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
}
return
}
u.Scheme = "http"
u.Host = finalCfg.LocalTarget
// 로컬 HTTP 요청용 헤더 구성 (pseudo-headers 제거). (ko)
// Build local HTTP headers, stripping pseudo-headers. (en)
httpHeader := make(http.Header, len(so.Header))
for k, hv := range so.Header {
if k == protocol.HeaderKeyMethod ||
k == protocol.HeaderKeyURL ||
k == protocol.HeaderKeyHost ||
k == protocol.HeaderKeyStatus {
continue
}
if hv == nil {
continue
}
for _, v := range hv.Values {
httpHeader.Add(k, v)
}
}
var reqBody io.Reader
if len(body) > 0 {
reqBody = bytes.NewReader(body)
}
req, err := http.NewRequestWithContext(ctx, method, u.String(), reqBody)
if err != nil {
errMsg := fmt.Sprintf("create http request from stream: %v", err)
log.Error("failed to create local http request", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
log.Error("failed to send error stream_open from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
log.Error("failed to send error stream_data from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
log.Error("failed to send error stream_close from client", logging.Fields{
"stream_id": streamID,
"error": err2.Error(),
})
}
return
}
req.Header = httpHeader
if len(body) > 0 {
req.ContentLength = int64(len(body))
}
start := time.Now()
logReq := log.With(logging.Fields{
"component": "grpc_client_proxy",
"stream_id": streamID,
"service": so.ServiceName,
"method": method,
"url": urlStr,
"local_target": finalCfg.LocalTarget,
})
logReq.Info("forwarding stream http request to local target", nil)
res, err := httpClient.Do(req)
if err != nil {
errMsg := fmt.Sprintf("perform local http request: %v", err)
logReq.Error("local http request failed", logging.Fields{
"error": err.Error(),
})
respHeader := map[string]*protocolpb.HeaderValues{
"Content-Type": {
Values: []string{"text/plain; charset=utf-8"},
},
protocol.HeaderKeyStatus: {
Values: []string{strconv.Itoa(http.StatusBadGateway)},
},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err2 := sendEnv(respOpen); err2 != nil {
logReq.Error("failed to send error stream_open from client", logging.Fields{
"error": err2.Error(),
})
return
}
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: 0,
Data: []byte("HopGate client: " + errMsg),
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
logReq.Error("failed to send error stream_data from client", logging.Fields{
"error": err2.Error(),
})
return
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: errMsg,
},
},
}
if err2 := sendEnv(closeEnv); err2 != nil {
logReq.Error("failed to send error stream_close from client", logging.Fields{
"error": err2.Error(),
})
}
return
}
defer res.Body.Close()
// 응답 헤더 맵을 복사하고 상태 코드를 pseudo-header 로 추가합니다. (ko)
// Copy response headers and attach status code as a pseudo-header. (en)
respHeader := make(map[string]*protocolpb.HeaderValues, len(res.Header)+1)
for k, vs := range res.Header {
hv := &protocolpb.HeaderValues{
Values: append([]string(nil), vs...),
}
respHeader[k] = hv
}
statusCode := res.StatusCode
if statusCode == 0 {
statusCode = http.StatusOK
}
respHeader[protocol.HeaderKeyStatus] = &protocolpb.HeaderValues{
Values: []string{strconv.Itoa(statusCode)},
}
respOpen := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: &protocolpb.StreamOpen{
Id: streamID,
ServiceName: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: respHeader,
},
},
}
if err := sendEnv(respOpen); err != nil {
logReq.Error("failed to send stream response open envelope from client", logging.Fields{
"error": err.Error(),
})
return
}
// 응답 바디를 4KiB(StreamChunkSize) 단위로 잘라 StreamData 프레임으로 전송합니다. (ko)
// Chunk the response body into 4KiB (StreamChunkSize) StreamData frames. (en)
buf := make([]byte, protocol.StreamChunkSize)
var seq uint64
for {
n, err := res.Body.Read(buf)
if n > 0 {
dataCopy := append([]byte(nil), buf[:n]...)
dataEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: &protocolpb.StreamData{
Id: streamID,
Seq: seq,
Data: dataCopy,
},
},
}
if err2 := sendEnv(dataEnv); err2 != nil {
logReq.Error("failed to send stream response data envelope from client", logging.Fields{
"error": err2.Error(),
})
return
}
seq++
}
if err == io.EOF {
break
}
if err != nil {
logReq.Error("failed to read local http response body", logging.Fields{
"error": err.Error(),
})
break
}
}
closeEnv := &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: &protocolpb.StreamClose{
Id: streamID,
Error: "",
},
},
}
if err := sendEnv(closeEnv); err != nil {
logReq.Error("failed to send stream response close envelope from client", logging.Fields{
"error": err.Error(),
})
return
}
logReq.Info("stream http response sent from client", logging.Fields{
"status": statusCode,
"elapsed_ms": time.Since(start).Milliseconds(),
"error": "",
})
}()
}
// 수신 루프: 서버에서 들어오는 StreamOpen/StreamData/StreamClose 를
// 로컬 HTTP 요청으로 변환하고 응답을 다시 터널로 전송합니다. (ko)
// Receive loop: convert incoming StreamOpen/StreamData/StreamClose into local
// HTTP requests and send responses back over the tunnel. (en)
for {
if ctx.Err() != nil {
log.Info("context cancelled, closing grpc tunnel client", logging.Fields{
"error": ctx.Err().Error(),
})
return ctx.Err()
}
in, err := stream.Recv()
if err != nil {
if err == io.EOF {
log.Info("grpc tunnel stream closed by server", nil)
return nil
}
log.Error("grpc tunnel receive error", logging.Fields{
"error": err.Error(),
})
return err
}
payloadType := "unknown"
switch payload := in.Payload.(type) {
case *protocolpb.Envelope_HttpRequest:
payloadType = "http_request"
case *protocolpb.Envelope_HttpResponse:
payloadType = "http_response"
case *protocolpb.Envelope_StreamOpen:
payloadType = "stream_open"
so := payload.StreamOpen
if so == nil {
log.Error("received stream_open with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(so.Id)
if streamID == "" {
log.Error("received stream_open with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
if _, exists := streams[streamID]; exists {
log.Error("received duplicate stream_open for existing stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
streamsMu.Unlock()
continue
}
streams[streamID] = &inboundStream{open: so}
streamsMu.Unlock()
case *protocolpb.Envelope_StreamData:
payloadType = "stream_data"
sd := payload.StreamData
if sd == nil {
log.Error("received stream_data with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(sd.Id)
if streamID == "" {
log.Error("received stream_data with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
st := streams[streamID]
streamsMu.Unlock()
if st == nil {
log.Warn("received stream_data for unknown stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
continue
}
if len(sd.Data) > 0 {
if _, err := st.body.Write(sd.Data); err != nil {
log.Error("failed to buffer stream_data body on grpc tunnel client", logging.Fields{
"stream_id": streamID,
"error": err.Error(),
})
}
}
case *protocolpb.Envelope_StreamClose:
payloadType = "stream_close"
sc := payload.StreamClose
if sc == nil {
log.Error("received stream_close with nil payload on grpc tunnel client", logging.Fields{})
continue
}
streamID := strings.TrimSpace(sc.Id)
if streamID == "" {
log.Error("received stream_close with empty stream id on grpc tunnel client", logging.Fields{})
continue
}
streamsMu.Lock()
st := streams[streamID]
if st != nil {
delete(streams, streamID)
}
streamsMu.Unlock()
if st == nil {
log.Warn("received stream_close for unknown stream on grpc tunnel client", logging.Fields{
"stream_id": streamID,
})
continue
}
// 현재까지 수신한 메타데이터/바디를 사용해 로컬 HTTP 요청을 수행하고,
// 응답을 다시 터널로 전송합니다. (ko)
// Use the accumulated metadata/body to perform the local HTTP request and
// send the response back over the tunnel. (en)
bodyCopy := append([]byte(nil), st.body.Bytes()...)
handleStream(st.open, bodyCopy)
case *protocolpb.Envelope_StreamAck:
payloadType = "stream_ack"
// 현재 gRPC 터널에서는 StreamAck 를 사용하지 않습니다. (ko)
// StreamAck is currently unused for gRPC tunnels. (en)
default:
payloadType = fmt.Sprintf("unknown(%T)", in.Payload)
}
log.Info("received envelope on grpc tunnel client", logging.Fields{
"payload_type": payloadType,
})
}
}
func main() {
logger := logging.NewStdJSONLogger("client")
// Define CLI flags (they take precedence over env values)
serverAddrFlag := flag.String("server-addr", "", "DTLS server address (host:port)")
domainFlag := flag.String("domain", "", "registered domain (e.g. api.example.com)")
apiKeyFlag := flag.String("api-key", "", "client API key for the domain (64 chars)")
localTargetFlag := flag.String("local-target", "", "local HTTP target (host:port), e.g. 127.0.0.1:8080")
flag.Parse()
// 1. Load the client configuration from environment variables (including .env)
// The internal/config package reads .env first and gives priority to OS environment variables that are already set.
envCfg, err := config.LoadClientConfigFromEnv()
if err != nil {
logger.Error("failed to load client config from env", logging.Fields{
@@ -53,6 +705,39 @@ func main() {
os.Exit(1)
}
// 2. Validate required environment variables (.env included; OS env vars take precedence)
serverAddrEnv := getEnvOrPanic(logger, "HOP_CLIENT_SERVER_ADDR")
clientDomainEnv := getEnvOrPanic(logger, "HOP_CLIENT_DOMAIN")
apiKeyEnv := getEnvOrPanic(logger, "HOP_CLIENT_API_KEY")
localTargetEnv := getEnvOrPanic(logger, "HOP_CLIENT_LOCAL_TARGET")
debugEnv := getEnvOrPanic(logger, "HOP_CLIENT_DEBUG")
// Check the debug flag format
if debugEnv != "true" && debugEnv != "false" {
logger.Error("invalid value for HOP_CLIENT_DEBUG; must be 'true' or 'false'", logging.Fields{
"env": "HOP_CLIENT_DEBUG",
"value": debugEnv,
})
os.Exit(1)
}
// Emit the validation results as a structured log
logger.Info("validated client env vars", logging.Fields{
"HOP_CLIENT_SERVER_ADDR": serverAddrEnv,
"HOP_CLIENT_DOMAIN": clientDomainEnv,
"HOP_CLIENT_API_KEY_MASK": maskAPIKey(apiKeyEnv),
"HOP_CLIENT_LOCAL_TARGET": localTargetEnv,
"HOP_CLIENT_DEBUG": debugEnv,
})
// Define CLI flags (they take precedence over env values)
serverAddrFlag := flag.String("server-addr", "", "HopGate server address (host:port)")
domainFlag := flag.String("domain", "", "registered domain (e.g. api.example.com)")
apiKeyFlag := flag.String("api-key", "", "client API key for the domain (64 chars)")
localTargetFlag := flag.String("local-target", "", "local HTTP target (host:port), e.g. 127.0.0.1:8080")
flag.Parse()
// 2. Build the final config: CLI flags first, env values as fallback
finalCfg := &config.ClientConfig{
ServerAddr: firstNonEmpty(strings.TrimSpace(*serverAddrFlag), strings.TrimSpace(envCfg.ServerAddr)),
@@ -87,6 +772,7 @@ func main() {
logger.Info("hop-gate client starting", logging.Fields{
"stack": "prometheus-loki-grafana",
"version": version,
"server_addr": finalCfg.ServerAddr,
"domain": finalCfg.Domain,
"local_target": finalCfg.LocalTarget,
@@ -94,78 +780,16 @@ func main() {
"debug": finalCfg.Debug,
})
// 4. DTLS client connection and handshake
ctx := context.Background()
// In debug mode, skip server certificate verification (InsecureSkipVerify=true) so that
// self-signed test certificates are also trusted.
// In production, keep Debug=false and use a tls.Config with the correct RootCAs / ServerName.
var tlsCfg *tls.Config
if finalCfg.Debug {
tlsCfg = &tls.Config{
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
}
} else {
// Production mode: system root CAs + the server domain as SNI (ServerName)
rootCAs, err := x509.SystemCertPool()
if err != nil || rootCAs == nil {
rootCAs = x509.NewCertPool()
}
tlsCfg = &tls.Config{
RootCAs: rootCAs,
MinVersion: tls.VersionTLS12,
}
}
// The DTLS server checks that the SNI (ServerName) matches HOP_SERVER_DOMAIN (cfg.Domain),
// so the client TLS config must also set the domain.
//
// finalCfg.ServerAddr has the form "host:port", so only the DNS host part goes into the SNI.
host := finalCfg.ServerAddr
if h, _, err := net.SplitHostPort(finalCfg.ServerAddr); err == nil && strings.TrimSpace(h) != "" {
host = h
}
tlsCfg.ServerName = host
client := dtls.NewPionClient(dtls.PionClientConfig{
Addr: finalCfg.ServerAddr,
TLSConfig: tlsCfg,
})
sess, err := client.Connect()
if err != nil {
logger.Error("failed to establish dtls session", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
defer sess.Close()
hsRes, err := dtls.PerformClientHandshake(ctx, sess, logger, finalCfg.Domain, finalCfg.ClientAPIKey, finalCfg.LocalTarget)
if err != nil {
logger.Error("dtls handshake failed", logging.Fields{
// 현재 클라이언트는 DTLS 레이어 없이 gRPC 터널만을 사용합니다. (ko)
// The client now uses only the gRPC tunnel, without any DTLS layer. (en)
if err := runGRPCTunnelClient(ctx, logger, finalCfg); err != nil {
logger.Error("grpc tunnel client exited with error", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
logger.Info("dtls handshake completed", logging.Fields{
"domain": hsRes.Domain,
"local_target": finalCfg.LocalTarget,
})
// 5. Start the client proxy loop that handles server requests on top of the DTLS session
clientProxy := proxy.NewClientProxy(logger, finalCfg.LocalTarget)
logger.Info("starting client proxy loop", logging.Fields{
"local_target": finalCfg.LocalTarget,
})
if err := clientProxy.StartLoop(ctx, sess); err != nil {
logger.Error("client proxy loop exited with error", logging.Fields{
"error": err.Error(),
})
os.Exit(1)
}
logger.Info("client proxy loop exited normally", nil)
logger.Info("grpc tunnel client exited normally", nil)
}
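
The final configuration above merges CLI flags with env values via firstNonEmpty, whose implementation is outside this hunk; presumably it returns the first non-blank argument, along these lines (hypothetical sketch, not the actual helper):

// Hypothetical sketch of the helper referenced in main(); the real
// implementation is not part of this diff.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if strings.TrimSpace(v) != "" {
			return v
		}
	}
	return ""
}

Called as firstNonEmpty(strings.TrimSpace(*serverAddrFlag), strings.TrimSpace(envCfg.ServerAddr)), this gives the "CLI flag first, env value second" precedence described in the comments.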

File diff suppressed because it is too large.

@@ -27,7 +27,6 @@ services:
# Map external 80/443 → container 8080/8443 (e.g. based on .env.example)
- "80:80" # HTTP
- "443:443" # HTTPS (TCP)
- "443:443/udp" # DTLS (UDP)
volumes:
# ACME certificate/account cache directory (persisted on the host)

go.mod
@@ -7,9 +7,10 @@ require (
github.com/go-acme/lego/v4 v4.28.1
github.com/google/uuid v1.6.0
github.com/lib/pq v1.10.9
github.com/pion/dtls/v3 v3.0.7
github.com/prometheus/client_golang v1.19.0
golang.org/x/net v0.47.0
google.golang.org/grpc v1.76.0
google.golang.org/protobuf v1.36.10
)
require (
@@ -19,15 +20,13 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/bmatcuk/doublestar v1.3.4 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/hashicorp/hcl/v2 v2.18.1 // indirect
github.com/miekg/dns v1.1.68 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/pion/logging v0.2.4 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
@@ -40,5 +39,5 @@ require (
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/tools v0.38.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8 // indirect
)

go.sum
@@ -14,18 +14,24 @@ github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQ
github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-acme/lego/v4 v4.28.1 h1:zt301JYF51UIEkpSXsdeGq9hRePeFzQCq070OdAmP0Q=
github.com/go-acme/lego/v4 v4.28.1/go.mod h1:bzjilr03IgbaOwlH396hq5W56Bi0/uoRwW/JM8hP7m4=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/inflect v0.19.0 h1:9jCH9scKIbHeV9m12SmPilScz6krDxKRasNNSNPXu/4=
github.com/go-openapi/inflect v0.19.0/go.mod h1:lHpZVlpIQqLyKwJ4N+YSc9hchQy/i12fJykb83CRBH4=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -46,12 +52,6 @@ github.com/miekg/dns v1.1.68 h1:jsSRkNozw7G/mnmXULynzMNIsgY2dHC8LO6U6Ij2JEA=
github.com/miekg/dns v1.1.68/go.mod h1:fujopn7TB3Pu3JM69XaawiU0wqjpL9/8xGop5UrTPps=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/pion/dtls/v3 v3.0.7 h1:bItXtTYYhZwkPFk4t1n3Kkf5TDrfj6+4wG+CZR8uI9Q=
github.com/pion/dtls/v3 v3.0.7/go.mod h1:uDlH5VPrgOQIw59irKYkMudSFprY9IEFCqz/eTz16f8=
github.com/pion/logging v0.2.4 h1:tTew+7cmQ+Mc1pTBLKH2puKsOvhm32dROumOZ655zB8=
github.com/pion/logging v0.2.4/go.mod h1:DffhXTKYdNZU+KtJ5pyQDjvOAh/GsNSyv1lbkFbe3so=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
@@ -72,6 +72,18 @@ github.com/zclconf/go-cty v1.14.4 h1:uXXczd9QDGsgu0i/QFR/hzI5NYCHLf6NQw/atrbnhq8
github.com/zclconf/go-cty v1.14.4/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty-yaml v1.1.0 h1:nP+jp0qPHv2IhUVqmQSzjvqAWcObN0KBkUl2rWBdig0=
github.com/zclconf/go-cty-yaml v1.1.0/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
@@ -86,6 +98,12 @@ golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8 h1:M1rk8KBnUsBDg1oPGHNCxG4vc1f49epmTO7xscSajMk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A=
google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

Binary file changed (not shown): 817 KiB → 2.6 MiB.

@@ -4,7 +4,7 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
=== High-level concept ===
- HopGate is a reverse HTTP gateway.
- A single public server terminates HTTPS and DTLS, and tunnels HTTP traffic to multiple clients.
- A single public server terminates HTTPS and exposes a tunnel endpoint (gRPC/HTTP2) to tunnel HTTP traffic to multiple clients.
- Each client runs in a private network and forwards HTTP requests to local services (127.0.0.1:PORT).
=== Main components to draw ===
@@ -19,8 +19,8 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Terminates TLS using ACME certificates for main and proxy domains.
b. "HTTP Listener (TCP 80)"
- Handles ACME HTTP-01 challenges and redirects HTTP to HTTPS.
c. "DTLS Listener (UDP 443 or 8443)"
- Terminates DTLS sessions from multiple clients.
c. "Tunnel Endpoint (gRPC)"
- gRPC/HTTP2 listener on the same HTTPS port (TCP 443) for tunnel streams.
d. "Admin API / Management Plane"
- REST API base path: /api/v1/admin
- Endpoints:
@@ -31,8 +31,9 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Routes incoming HTTP(S) requests to the correct client based on domain and path.
f. "ACME Certificate Manager"
- Automatically issues and renews TLS certificates (Let's Encrypt).
g. "DTLS Session Manager"
- Manages DTLS connections and per-domain sessions with clients.
g. "Tunnel Session Manager"
- Manages tunnel connections and per-domain sessions with clients
(gRPC streams).
h. "Metrics & Logging"
- Structured JSON logs shipped to Prometheus / Loki / Grafana stack.
@@ -50,35 +51,34 @@ Please draw a clean, modern system architecture diagram for a project called "Ho
- Draw 2-3 separate client boxes to show that multiple clients can connect.
- Each box titled "HopGate Client".
- Inside each client box, show:
a. "DTLS Client"
- Connects to HopGate Server via DTLS.
- Performs handshake with:
- domain
- client_api_key
a. "Tunnel Client"
- gRPC client that opens a long-lived bi-directional gRPC stream over HTTPS (HTTP/2).
b. "Client Proxy"
- Receives HTTP requests from the server over DTLS.
- Receives HTTP request frames from the server over the tunnel (gRPC stream).
- Forwards them to local services such as:
- 127.0.0.1:8080 (web)
- 127.0.0.1:9000 (admin)
c. "Local Services"
- A small group of boxes representing local HTTP servers.
=== Flows to highlight ===
1) User HTTP Flow
- External user -> HTTPS Listener -> Reverse Proxy Core -> DTLS Session Manager -> Specific HopGate Client -> Local Service -> back through same path to the user.
- External user -> HTTPS Listener -> Reverse Proxy Core ->
gRPC Tunnel Endpoint -> specific HopGate Client (gRPC stream) -> Local Service ->
back through same path to the user.
2) Admin Flow
- Administrator -> Admin API (with Bearer admin key) -> PostgreSQL + ent ORM:
- Register domain + memo -> returns client_api_key.
- Unregister domain + client_api_key.
3) DTLS Handshake Flow
- From client to server over DTLS:
- Client sends {domain, client_api_key}.
- Server validates against PostgreSQL Domain table.
- On success, both sides log:
- server: which domain is bound to the session.
- client: success message, bound domain, and local_target (local service address).
3) Tunnel Handshake / Session Establishment Flow
- v1: DTLS Handshake Flow (legacy) - (REMOVED)
- v2: gRPC Tunnel Establishment Flow:
- From client to server over HTTPS (HTTP/2):
- Client opens a long-lived bi-directional gRPC stream (e.g. OpenTunnel).
- First frame includes {domain, client_api_key} and client metadata.
- Server validates against PostgreSQL Domain table and associates the gRPC stream with that domain.
- Subsequent frames carry HTTP request/response metadata and body chunks.
=== Visual style ===
- Clean flat design, no 3D.
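
For illustration, the v2 establishment flow above could look roughly like this on the client side. This is a sketch only: the generated constructor (protocolpb.NewHopGateTunnelClient), the OpenTunnel method name, and the use of gRPC metadata to carry {domain, client_api_key} are assumptions made for the example, not the confirmed wire contract.

package main

import (
	"context"
	"crypto/tls"
	"log"

	protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/metadata"
)

func main() {
	// Dial the public HTTPS/HTTP2 port of the HopGate server.
	conn, err := grpc.NewClient("gate.example.com:443",
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{MinVersion: tls.VersionTLS12})))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Assumed generated client; the service definition itself is not shown in this diff.
	client := protocolpb.NewHopGateTunnelClient(conn)

	// Handshake data carried as gRPC metadata here purely for illustration.
	ctx := metadata.AppendToOutgoingContext(context.Background(),
		"domain", "api.example.com",
		"client-api-key", "<64-char key>")

	// Long-lived bi-directional stream; Envelope frames (StreamOpen/StreamData/StreamClose)
	// flow in both directions from here on.
	stream, err := client.OpenTunnel(ctx)
	if err != nil {
		log.Fatal(err)
	}
	_ = stream
}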


@@ -107,9 +107,13 @@ func loadDotEnvOnce() {
val = strings.Trim(val, `"'`)
if key != "" {
// If a value is already set as an OS environment variable, keep that value
// and inject the .env value only for keys that are not set.
if _, exists := os.LookupEnv(key); !exists {
_ = os.Setenv(key, val)
}
}
}
if err := scanner.Err(); err != nil {
dotenvErr = err
return
@@ -209,7 +213,8 @@ func loadLoggingFromEnv() LoggingConfig {
}
}
// LoadServerConfigFromEnv reads .env first and then builds the server configuration from environment variables.
// LoadServerConfigFromEnv reads .env once to supplement the current environment variables,
// then builds the server configuration with "environment variables > .env" precedence.
func LoadServerConfigFromEnv() (*ServerConfig, error) {
loadDotEnvOnce()
if dotenvErr != nil {
@@ -228,7 +233,8 @@ func LoadServerConfigFromEnv() (*ServerConfig, error) {
return cfg, nil
}
// LoadClientConfigFromEnv reads .env first and then builds the client configuration from environment variables.
// LoadClientConfigFromEnv reads .env once to supplement the current environment variables,
// then builds the client configuration with "environment variables > .env" precedence.
// The fields actually used at runtime are ServerAddr, Domain, ClientAPIKey, and LocalTarget.
func LoadClientConfigFromEnv() (*ClientConfig, error) {
loadDotEnvOnce()
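
A standalone sketch of the "environment variables > .env" precedence described above (illustration only; it mirrors the os.LookupEnv check in loadDotEnvOnce rather than calling the package):

package main

import (
	"fmt"
	"os"
)

// resolve mirrors the precedence rule: an OS environment variable wins,
// and the .env value is used only when the key is not set at all.
func resolve(key, dotenvValue string) string {
	if v, exists := os.LookupEnv(key); exists {
		return v
	}
	return dotenvValue
}

func main() {
	os.Setenv("HOP_CLIENT_DOMAIN", "api.example.com") // already set by the shell
	fmt.Println(resolve("HOP_CLIENT_DOMAIN", "from-dotenv"))          // prints "api.example.com"
	fmt.Println(resolve("HOP_CLIENT_LOCAL_TARGET", "127.0.0.1:8080")) // falls back to the .env value when unset
}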


@@ -1,218 +1,58 @@
package dtls
import (
"context"
"crypto/tls"
"fmt"
"net"
"time"
piondtls "github.com/pion/dtls/v3"
)
// pionSession wraps pion/dtls.Conn to implement the Session interface.
type pionSession struct {
conn *piondtls.Conn
id string
}
func (s *pionSession) Read(b []byte) (int, error) { return s.conn.Read(b) }
func (s *pionSession) Write(b []byte) (int, error) { return s.conn.Write(b) }
func (s *pionSession) Close() error { return s.conn.Close() }
func (s *pionSession) ID() string { return s.id }
// pionServer is a pion/dtls-based Server implementation.
type pionServer struct {
listener net.Listener
}
// PionServerConfig defines the DTLS server listener configuration.
// PionServerConfig 는 DTLS 서버 리스너 구성을 정의하는 기존 구조체를 그대로 유지합니다. (ko)
// PionServerConfig keeps the old DTLS server listener configuration shape for compatibility. (en)
type PionServerConfig struct {
// Addr 는 "0.0.0.0:443" 와 같은 UDP 리스닝 주소입니다.
Addr string
// TLSConfig is a tls.Config prepared via ACME or similar.
// Settings such as Certificates, RootCAs, and ClientAuth are passed in through it.
// If nil, a default empty tls.Config is used.
TLSConfig *tls.Config
}
// NewPionServer creates a pion/dtls-based DTLS server.
// Internally it opens a UDP listener and prepares to perform DTLS handshakes.
func NewPionServer(cfg PionServerConfig) (Server, error) {
if cfg.Addr == "" {
return nil, fmt.Errorf("PionServerConfig.Addr is required")
}
if cfg.TLSConfig == nil {
cfg.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
}
}
udpAddr, err := net.ResolveUDPAddr("udp", cfg.Addr)
if err != nil {
return nil, fmt.Errorf("resolve udp addr: %w", err)
}
// Adapter from tls.Config.GetCertificate (crypto/tls) to pion/dtls GetCertificate
var getCert func(*piondtls.ClientHelloInfo) (*tls.Certificate, error)
if cfg.TLSConfig.GetCertificate != nil {
tlsGetCert := cfg.TLSConfig.GetCertificate
getCert = func(chi *piondtls.ClientHelloInfo) (*tls.Certificate, error) {
if chi == nil {
return tlsGetCert(&tls.ClientHelloInfo{})
}
// The ACME manager selects certificates mainly based on SNI (ServerName),
// so copy and pass only the minimal fields required.
return tlsGetCert(&tls.ClientHelloInfo{
ServerName: chi.ServerName,
})
}
}
dtlsCfg := &piondtls.Config{
// Certificate settings the server will use: static Certificates + the GetCertificate adapter
Certificates: cfg.TLSConfig.Certificates,
GetCertificate: getCert,
InsecureSkipVerify: cfg.TLSConfig.InsecureSkipVerify,
ClientAuth: piondtls.ClientAuthType(cfg.TLSConfig.ClientAuth),
ClientCAs: cfg.TLSConfig.ClientCAs,
RootCAs: cfg.TLSConfig.RootCAs,
ServerName: cfg.TLSConfig.ServerName,
// Add ExtendedMasterSecret and similar options here if needed
}
l, err := piondtls.Listen("udp", udpAddr, dtlsCfg)
if err != nil {
return nil, fmt.Errorf("dtls listen: %w", err)
}
return &pionServer{
listener: l,
}, nil
}
// Accept accepts a new DTLS connection and wraps it as a Session.
func (s *pionServer) Accept() (Session, error) {
conn, err := s.listener.Accept()
if err != nil {
return nil, err
}
dtlsConn, ok := conn.(*piondtls.Conn)
if !ok {
_ = conn.Close()
return nil, fmt.Errorf("accepted connection is not *dtls.Conn")
}
id := ""
if ra := dtlsConn.RemoteAddr(); ra != nil {
id = ra.String()
}
return &pionSession{
conn: dtlsConn,
id: id,
}, nil
}
// Close shuts down the DTLS listener.
func (s *pionServer) Close() error {
return s.listener.Close()
}
// pionClient is a pion/dtls-based Client implementation.
type pionClient struct {
addr string
tlsConfig *tls.Config
timeout time.Duration
}
// PionClientConfig defines the DTLS client configuration.
// PionClientConfig 는 DTLS 클라이언트 구성을 정의하는 기존 구조체를 그대로 유지합니다. (ko)
// PionClientConfig keeps the old DTLS client configuration shape for compatibility. (en)
type PionClientConfig struct {
// Addr is the server's UDP address (e.g. "example.com:443").
Addr string
// TLSConfig is the tls.Config used for server authentication.
// InsecureSkipVerify=true skips server verification, so use it only for development/testing.
TLSConfig *tls.Config
// Timeout is the DTLS handshake timeout.
// If 0, a default of 10 seconds is used.
Timeout time.Duration
}
// NewPionClient creates a pion/dtls-based DTLS client.
func NewPionClient(cfg PionClientConfig) Client {
if cfg.Timeout == 0 {
cfg.Timeout = 10 * time.Second
}
if cfg.TLSConfig == nil {
// Default: a safe configuration that verifies certificates (using the system root CA chain).
// To skip certificate verification in debug mode, the caller must explicitly pass
// TLSConfig: &tls.Config{InsecureSkipVerify: true}.
cfg.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
}
}
return &pionClient{
addr: cfg.Addr,
tlsConfig: cfg.TLSConfig,
timeout: cfg.Timeout,
}
// disabledServer 는 DTLS 전송이 비활성화되었음을 나타내는 더미 구현입니다. (ko)
// disabledServer is a dummy Server implementation indicating that DTLS transport is disabled. (en)
type disabledServer struct{}
func (s *disabledServer) Accept() (Session, error) {
return nil, fmt.Errorf("dtls transport is disabled; use gRPC tunnel instead")
}
// Connect performs the DTLS handshake with the server and returns a Session.
func (c *pionClient) Connect() (Session, error) {
if c.addr == "" {
return nil, fmt.Errorf("PionClientConfig.Addr is required")
}
ctx, cancel := context.WithTimeout(context.Background(), c.timeout)
defer cancel()
raddr, err := net.ResolveUDPAddr("udp", c.addr)
if err != nil {
return nil, fmt.Errorf("resolve udp addr: %w", err)
}
dtlsCfg := &piondtls.Config{
// The client uses only RootCAs/ServerName for server authentication.
// (GetCertificate is not passed because client certificates are not planned for now.)
Certificates: c.tlsConfig.Certificates,
InsecureSkipVerify: c.tlsConfig.InsecureSkipVerify,
RootCAs: c.tlsConfig.RootCAs,
ServerName: c.tlsConfig.ServerName,
}
type result struct {
conn *piondtls.Conn
err error
}
ch := make(chan result, 1)
go func() {
conn, err := piondtls.Dial("udp", raddr, dtlsCfg)
ch <- result{conn: conn, err: err}
}()
select {
case <-ctx.Done():
return nil, fmt.Errorf("dtls dial timeout: %w", ctx.Err())
case res := <-ch:
if res.err != nil {
return nil, fmt.Errorf("dtls dial: %w", res.err)
}
id := ""
if ra := res.conn.RemoteAddr(); ra != nil {
id = ra.String()
}
return &pionSession{
conn: res.conn,
id: id,
}, nil
}
}
// Close is a no-op because the client holds no resources of its own.
func (c *pionClient) Close() error {
func (s *disabledServer) Close() error {
return nil
}
// disabledClient 는 DTLS 전송이 비활성화되었음을 나타내는 더미 구현입니다. (ko)
// disabledClient is a dummy Client implementation indicating that DTLS transport is disabled. (en)
type disabledClient struct{}
func (c *disabledClient) Connect() (Session, error) {
return nil, fmt.Errorf("dtls transport is disabled; use gRPC tunnel instead")
}
func (c *disabledClient) Close() error {
return nil
}
// NewPionServer 는 더 이상 실제 DTLS 서버를 생성하지 않고, 항상 에러를 반환합니다. (ko)
// NewPionServer no longer creates a real DTLS server and always returns an error. (en)
func NewPionServer(cfg PionServerConfig) (Server, error) {
return nil, fmt.Errorf("dtls transport is disabled; NewPionServer is no longer supported")
}
// NewPionClient 는 더 이상 실제 DTLS 클라이언트를 생성하지 않고, disabledClient 를 반환합니다. (ko)
// NewPionClient no longer creates a real DTLS client and instead returns a disabledClient. (en)
func NewPionClient(cfg PionClientConfig) Client {
return &disabledClient{}
}
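
From a caller's point of view, the disabled stubs above turn any leftover DTLS usage into an explicit error instead of a session. A minimal sketch (the error string comes from the stubs; the import path under the module and the logging are illustrative assumptions):

package main

import (
	"log"

	"github.com/dalbodeule/hop-gate/internal/transport/dtls"
)

func main() {
	client := dtls.NewPionClient(dtls.PionClientConfig{Addr: "gate.example.com:443"})
	if _, err := client.Connect(); err != nil {
		// err: "dtls transport is disabled; use gRPC tunnel instead"
		log.Printf("DTLS transport removed, switch to the gRPC tunnel: %v", err)
	}
}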


@@ -1,2 +1,2 @@
/*! tailwindcss v4.1.17 | MIT License | https://tailwindcss.com */
@layer properties{@supports (((-webkit-hyphens:none)) and (not (margin-trim:inline))) or ((-moz-orient:inline) and (not (color:rgb(from red r g b)))){*,:before,:after,::backdrop{--tw-tracking:initial;--tw-blur:initial;--tw-brightness:initial;--tw-contrast:initial;--tw-grayscale:initial;--tw-hue-rotate:initial;--tw-invert:initial;--tw-opacity:initial;--tw-saturate:initial;--tw-sepia:initial;--tw-drop-shadow:initial;--tw-drop-shadow-color:initial;--tw-drop-shadow-alpha:100%;--tw-drop-shadow-size:initial}}}.visible{visibility:visible}.absolute{position:absolute}.fixed{position:fixed}.static{position:static}.contents{display:contents}.flex{display:flex}.inline-flex{display:inline-flex}.table{display:table}.min-h-screen{min-height:100vh}.w-\[240px\]{width:240px}.w-full{width:100%}.flex-col{flex-direction:column}.items-baseline{align-items:baseline}.items-center{align-items:center}.justify-center{justify-content:center}.text-center{text-align:center}.tracking-\[0\.25em\]{--tw-tracking:.25em;letter-spacing:.25em}.uppercase{text-transform:uppercase}.opacity-90{opacity:.9}.filter{filter:var(--tw-blur,)var(--tw-brightness,)var(--tw-contrast,)var(--tw-grayscale,)var(--tw-hue-rotate,)var(--tw-invert,)var(--tw-saturate,)var(--tw-sepia,)var(--tw-drop-shadow,)}@property --tw-tracking{syntax:"*";inherits:false}@property --tw-blur{syntax:"*";inherits:false}@property --tw-brightness{syntax:"*";inherits:false}@property --tw-contrast{syntax:"*";inherits:false}@property --tw-grayscale{syntax:"*";inherits:false}@property --tw-hue-rotate{syntax:"*";inherits:false}@property --tw-invert{syntax:"*";inherits:false}@property --tw-opacity{syntax:"*";inherits:false}@property --tw-saturate{syntax:"*";inherits:false}@property --tw-sepia{syntax:"*";inherits:false}@property --tw-drop-shadow{syntax:"*";inherits:false}@property --tw-drop-shadow-color{syntax:"*";inherits:false}@property --tw-drop-shadow-alpha{syntax:"<percentage>";inherits:false;initial-value:100%}@property --tw-drop-shadow-size{syntax:"*";inherits:false}
@layer properties{@supports (((-webkit-hyphens:none)) and (not (margin-trim:inline))) or ((-moz-orient:inline) and (not (color:rgb(from red r g b)))){*,:before,:after,::backdrop{--tw-tracking:initial;--tw-blur:initial;--tw-brightness:initial;--tw-contrast:initial;--tw-grayscale:initial;--tw-hue-rotate:initial;--tw-invert:initial;--tw-opacity:initial;--tw-saturate:initial;--tw-sepia:initial;--tw-drop-shadow:initial;--tw-drop-shadow-color:initial;--tw-drop-shadow-alpha:100%;--tw-drop-shadow-size:initial}}}.visible{visibility:visible}.absolute{position:absolute}.fixed{position:fixed}.static{position:static}.container{width:100%}.contents{display:contents}.flex{display:flex}.inline-flex{display:inline-flex}.table{display:table}.min-h-screen{min-height:100vh}.w-\[240px\]{width:240px}.w-full{width:100%}.flex-col{flex-direction:column}.items-baseline{align-items:baseline}.items-center{align-items:center}.justify-center{justify-content:center}.text-center{text-align:center}.tracking-\[0\.25em\]{--tw-tracking:.25em;letter-spacing:.25em}.uppercase{text-transform:uppercase}.opacity-90{opacity:.9}.filter{filter:var(--tw-blur,)var(--tw-brightness,)var(--tw-contrast,)var(--tw-grayscale,)var(--tw-hue-rotate,)var(--tw-invert,)var(--tw-saturate,)var(--tw-sepia,)var(--tw-drop-shadow,)}@property --tw-tracking{syntax:"*";inherits:false}@property --tw-blur{syntax:"*";inherits:false}@property --tw-brightness{syntax:"*";inherits:false}@property --tw-contrast{syntax:"*";inherits:false}@property --tw-grayscale{syntax:"*";inherits:false}@property --tw-hue-rotate{syntax:"*";inherits:false}@property --tw-invert{syntax:"*";inherits:false}@property --tw-opacity{syntax:"*";inherits:false}@property --tw-saturate{syntax:"*";inherits:false}@property --tw-sepia{syntax:"*";inherits:false}@property --tw-drop-shadow{syntax:"*";inherits:false}@property --tw-drop-shadow-color{syntax:"*";inherits:false}@property --tw-drop-shadow-alpha{syntax:"<percentage>";inherits:false;initial-value:100%}@property --tw-drop-shadow-size{syntax:"*";inherits:false}

Binary file added (not shown): 126 KiB.

@@ -6,6 +6,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Tailwind CSS is served separately from /__hopgate_assets__/errors.css -->
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>404 Not Found - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>500 Internal Server Error - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -0,0 +1,33 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>502 Bad Gateway - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">
<div class="items-center justify-center gap-3 mb-8 flex flex-col">
<img src="/__hopgate_assets__/hop-gate.png" alt="HopGate" class="h-8 w-[240px] opacity-90" />
<h2 class="text-md font-medium tracking-[0.25em] uppercase text-slate-400">HopGate</h2>
</div>
<div class="inline-flex items-baseline gap-4 mb-4">
<span class="text-6xl md:text-7xl font-extrabold tracking-[0.25em] text-amber-200">502</span>
<span class="text-lg md:text-xl font-semibold text-slate-100">Bad Gateway</span>
</div>
<p class="text-sm md:text-base text-slate-300 leading-relaxed">
HopGate could not get a valid response from the backend service.<br>
HopGate가 백엔드 서비스로부터 유효한 응답을 받지 못했습니다.
</p>
<div class="mt-8 text-xs md:text-sm text-slate-500">
This may happen when the origin is down, misconfigured, or responding with invalid data.<br>
원본 서버가 다운되었거나 설정이 잘못되었거나, 잘못된 응답을 보내는 경우 발생할 수 있습니다.
</div>
</div>
</body>
</html>


@@ -5,6 +5,7 @@
<title>504 Gateway Timeout - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">


@@ -5,6 +5,7 @@
<title>525 TLS Handshake Failed - HopGate</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/__hopgate_assets__/errors.css">
<link rel="icon" href="/__hopgate_assets__/favicon.ico">
</head>
<body class="min-h-screen bg-slate-950 text-slate-50 flex items-center justify-center px-4">
<div class="w-full max-w-xl text-center">

internal/protocol/codec.go

@@ -0,0 +1,427 @@
package protocol
import (
"bufio"
"encoding/binary"
"encoding/json"
"fmt"
"io"
protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
"google.golang.org/protobuf/proto"
)
// defaultDecoderBufferSize is the buffer size used so that the JSON decoder can safely
// process application data decrypted by pion/dtls.
// This matches existing 64KiB readers used around DTLS sessions (used by the JSON codec).
const defaultDecoderBufferSize = 64 * 1024
// dtlsReadBufferSize 는 pion/dtls 내부 버퍼 한계에 맞춘 읽기 버퍼 크기입니다.
// pion/dtls 의 UnpackDatagram 함수는 8KB (8,192 bytes) 의 기본 수신 버퍼를 사용합니다.
// DTLS는 UDP 기반이므로 한 번의 Read()에서 전체 datagram을 읽어야 하며,
// 이 크기를 초과하는 DTLS 레코드는 처리되지 않습니다.
// dtlsReadBufferSize matches the pion/dtls internal buffer limit.
// pion/dtls's UnpackDatagram function uses an 8KB (8,192 bytes) receive buffer.
// Since DTLS is UDP-based, the entire datagram must be read in a single Read() call,
// and DTLS records exceeding this size cannot be processed.
const dtlsReadBufferSize = 8 * 1024 // 8KB
// maxProtoEnvelopeBytes is a conservative upper bound on the size of a single Protobuf Envelope.
// Decode rejects envelopes whose declared length exceeds this value.
const maxProtoEnvelopeBytes = 512 * 1024 // 512KiB, a comfortably generous value
// WireCodec abstracts serialization/deserialization of protocol.Envelope.
// Swapping in JSON, Protobuf, length-prefixed binary, etc. only requires keeping this interface.
type WireCodec interface {
Encode(w io.Writer, env *Envelope) error
Decode(r io.Reader, env *Envelope) error
}
// jsonCodec is the JSON-based WireCodec implementation.
// It is kept around in case JSON serialization is still wanted.
type jsonCodec struct{}
// Encode 는 Envelope 를 JSON 으로 인코딩해 작성합니다.
// Encode encodes an Envelope as JSON to the given writer.
func (jsonCodec) Encode(w io.Writer, env *Envelope) error {
enc := json.NewEncoder(w)
return enc.Encode(env)
}
// Decode 는 DTLS 세션에서 읽은 데이터를 JSON Envelope 로 디코딩합니다.
// pion/dtls 의 버퍼 특성 때문에, 충분히 큰 bufio.Reader 로 감싸서 사용합니다.
// Decode decodes an Envelope from JSON using a buffered reader on top of the DTLS session.
func (jsonCodec) Decode(r io.Reader, env *Envelope) error {
dec := json.NewDecoder(bufio.NewReaderSize(r, defaultDecoderBufferSize))
return dec.Decode(env)
}
// protobufCodec is a WireCodec implementation based on Protobuf length-prefix framing.
// Each Envelope is encoded as [4-byte big-endian length][protobuf bytes].
type protobufCodec struct{}
// Encode 는 Envelope 를 Protobuf Envelope 로 변환한 뒤, length-prefix 프레이밍으로 기록합니다.
// DTLS는 UDP 기반이므로, length prefix와 protobuf 데이터를 단일 버퍼로 합쳐 하나의 Write로 전송합니다.
// Encode encodes an Envelope as a length-prefixed protobuf message.
// For DTLS (UDP-based), we combine the length prefix and protobuf data into a single buffer
// and send it with a single Write call to preserve message boundaries.
func (protobufCodec) Encode(w io.Writer, env *Envelope) error {
pbEnv, err := toProtoEnvelope(env)
if err != nil {
return err
}
// Body/stream payload 하드 리밋: 4KiB (StreamChunkSize).
// HTTP 단일 Envelope 및 스트림 기반 프레임 모두에서 payload 가 이 값을 넘지 않도록 강제합니다.
// Enforce a 4KiB hard limit (StreamChunkSize) for HTTP bodies and stream payloads.
switch env.Type {
case MessageTypeHTTP:
if env.HTTPRequest != nil && len(env.HTTPRequest.Body) > int(StreamChunkSize) {
return fmt.Errorf("protobuf codec: http request body too large: %d bytes (max %d)", len(env.HTTPRequest.Body), StreamChunkSize)
}
if env.HTTPResponse != nil && len(env.HTTPResponse.Body) > int(StreamChunkSize) {
return fmt.Errorf("protobuf codec: http response body too large: %d bytes (max %d)", len(env.HTTPResponse.Body), StreamChunkSize)
}
case MessageTypeStreamData:
if env.StreamData != nil && len(env.StreamData.Data) > int(StreamChunkSize) {
return fmt.Errorf("protobuf codec: stream data payload too large: %d bytes (max %d)", len(env.StreamData.Data), StreamChunkSize)
}
}
data, err := proto.Marshal(pbEnv)
if err != nil {
return fmt.Errorf("protobuf marshal envelope: %w", err)
}
if len(data) == 0 {
return fmt.Errorf("protobuf codec: empty marshaled envelope")
}
if len(data) > int(^uint32(0)) {
return fmt.Errorf("protobuf codec: envelope too large: %d bytes", len(data))
}
// DTLS 환경에서는 length prefix와 protobuf 데이터를 단일 버퍼로 합쳐서 하나의 Write로 전송
// For DTLS, combine length prefix and protobuf data into a single buffer
frame := make([]byte, 4+len(data))
binary.BigEndian.PutUint32(frame[:4], uint32(len(data)))
copy(frame[4:], data)
if _, err := w.Write(frame); err != nil {
return fmt.Errorf("protobuf codec: write frame: %w", err)
}
return nil
}
// Decode 는 length-prefix 프레임에서 Protobuf Envelope 를 읽어들여
// 내부 Envelope 구조체로 변환합니다.
// DTLS는 UDP 기반이므로, 한 번의 Read로 전체 데이터그램을 읽습니다.
// Decode reads a length-prefixed protobuf Envelope and converts it into the internal Envelope.
// For DTLS (UDP-based), we read the entire datagram in a single Read call.
func (protobufCodec) Decode(r io.Reader, env *Envelope) error {
// 1) Read exactly the 4-byte length prefix.
header := make([]byte, 4)
if _, err := io.ReadFull(r, header); err != nil {
return fmt.Errorf("protobuf codec: read length prefix: %w", err)
}
length := binary.BigEndian.Uint32(header)
if length == 0 {
return fmt.Errorf("protobuf codec: zero-length envelope")
}
if length > maxProtoEnvelopeBytes {
return fmt.Errorf("protobuf codec: envelope too large: %d bytes (max %d)", length, maxProtoEnvelopeBytes)
}
// 2) Read exactly length bytes of payload.
payload := make([]byte, int(length))
if _, err := io.ReadFull(r, payload); err != nil {
return fmt.Errorf("protobuf codec: read payload: %w", err)
}
var pbEnv protocolpb.Envelope
if err := proto.Unmarshal(payload, &pbEnv); err != nil {
return fmt.Errorf("protobuf codec: unmarshal envelope: %w", err)
}
return fromProtoEnvelope(&pbEnv, env)
}
// DefaultCodec is the default WireCodec used by the current runtime.
// The Protobuf length-prefix codec is currently the default.
// Both the server and the client must use this version for the wire format to match.
var DefaultCodec WireCodec = protobufCodec{}
// GetDTLSReadBufferSize 는 DTLS 세션 읽기에 사용할 버퍼 크기를 반환합니다.
// 이 값은 pion/dtls 내부 버퍼 한계(8KB)에 맞춰져 있습니다.
// GetDTLSReadBufferSize returns the buffer size to use for reading from DTLS sessions.
// This value is aligned with pion/dtls's internal buffer limit (8KB).
func GetDTLSReadBufferSize() int {
return dtlsReadBufferSize
}
// toProtoEnvelope converts the internal Envelope struct into a Protobuf Envelope.
// The current implementation supports HTTP request/response and the stream types (StreamOpen/StreamData/StreamClose/StreamAck).
func toProtoEnvelope(env *Envelope) (*protocolpb.Envelope, error) {
switch env.Type {
case MessageTypeHTTP:
if env.HTTPRequest != nil {
req := env.HTTPRequest
pbReq := &protocolpb.Request{
RequestId: req.RequestID,
ClientId: req.ClientID,
ServiceName: req.ServiceName,
Method: req.Method,
Url: req.URL,
Header: make(map[string]*protocolpb.HeaderValues, len(req.Header)),
Body: req.Body,
}
for k, vs := range req.Header {
hv := &protocolpb.HeaderValues{
Values: append([]string(nil), vs...),
}
pbReq.Header[k] = hv
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_HttpRequest{
HttpRequest: pbReq,
},
}, nil
}
if env.HTTPResponse != nil {
resp := env.HTTPResponse
pbResp := &protocolpb.Response{
RequestId: resp.RequestID,
Status: int32(resp.Status),
Header: make(map[string]*protocolpb.HeaderValues, len(resp.Header)),
Body: resp.Body,
Error: resp.Error,
}
for k, vs := range resp.Header {
hv := &protocolpb.HeaderValues{
Values: append([]string(nil), vs...),
}
pbResp.Header[k] = hv
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_HttpResponse{
HttpResponse: pbResp,
},
}, nil
}
return nil, fmt.Errorf("protobuf codec: http envelope has neither request nor response")
case MessageTypeStreamOpen:
if env.StreamOpen == nil {
return nil, fmt.Errorf("protobuf codec: stream_open envelope missing payload")
}
so := env.StreamOpen
pbSO := &protocolpb.StreamOpen{
Id: string(so.ID),
ServiceName: so.Service,
TargetAddr: so.TargetAddr,
Header: make(map[string]*protocolpb.HeaderValues, len(so.Header)),
}
for k, vs := range so.Header {
hv := &protocolpb.HeaderValues{
Values: append([]string(nil), vs...),
}
pbSO.Header[k] = hv
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamOpen{
StreamOpen: pbSO,
},
}, nil
case MessageTypeStreamData:
if env.StreamData == nil {
return nil, fmt.Errorf("protobuf codec: stream_data envelope missing payload")
}
sd := env.StreamData
pbSD := &protocolpb.StreamData{
Id: string(sd.ID),
Seq: sd.Seq,
Data: sd.Data,
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamData{
StreamData: pbSD,
},
}, nil
case MessageTypeStreamClose:
if env.StreamClose == nil {
return nil, fmt.Errorf("protobuf codec: stream_close envelope missing payload")
}
sc := env.StreamClose
pbSC := &protocolpb.StreamClose{
Id: string(sc.ID),
Error: sc.Error,
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamClose{
StreamClose: pbSC,
},
}, nil
case MessageTypeStreamAck:
if env.StreamAck == nil {
return nil, fmt.Errorf("protobuf codec: stream_ack envelope missing payload")
}
sa := env.StreamAck
pbSA := &protocolpb.StreamAck{
Id: string(sa.ID),
AckSeq: sa.AckSeq,
LostSeqs: append([]uint64(nil), sa.LostSeqs...),
WindowSize: sa.WindowSize,
}
return &protocolpb.Envelope{
Payload: &protocolpb.Envelope_StreamAck{
StreamAck: pbSA,
},
}, nil
default:
return nil, fmt.Errorf("protobuf codec: unsupported envelope type %q", env.Type)
}
}
// fromProtoEnvelope converts a Protobuf Envelope into the internal Envelope struct.
// The current implementation supports HTTP request/response and the stream types (StreamOpen/StreamData/StreamClose/StreamAck).
func fromProtoEnvelope(pbEnv *protocolpb.Envelope, env *Envelope) error {
switch payload := pbEnv.Payload.(type) {
case *protocolpb.Envelope_HttpRequest:
req := payload.HttpRequest
if req == nil {
return fmt.Errorf("protobuf codec: http_request payload is nil")
}
hdr := make(map[string][]string, len(req.Header))
for k, hv := range req.Header {
if hv == nil {
continue
}
hdr[k] = append([]string(nil), hv.Values...)
}
env.Type = MessageTypeHTTP
env.HTTPRequest = &Request{
RequestID: req.RequestId,
ClientID: req.ClientId,
ServiceName: req.ServiceName,
Method: req.Method,
URL: req.Url,
Header: hdr,
Body: append([]byte(nil), req.Body...),
}
env.HTTPResponse = nil
env.StreamOpen = nil
env.StreamData = nil
env.StreamClose = nil
env.StreamAck = nil
return nil
case *protocolpb.Envelope_HttpResponse:
resp := payload.HttpResponse
if resp == nil {
return fmt.Errorf("protobuf codec: http_response payload is nil")
}
hdr := make(map[string][]string, len(resp.Header))
for k, hv := range resp.Header {
if hv == nil {
continue
}
hdr[k] = append([]string(nil), hv.Values...)
}
env.Type = MessageTypeHTTP
env.HTTPResponse = &Response{
RequestID: resp.RequestId,
Status: int(resp.Status),
Header: hdr,
Body: append([]byte(nil), resp.Body...),
Error: resp.Error,
}
env.HTTPRequest = nil
env.StreamOpen = nil
env.StreamData = nil
env.StreamClose = nil
env.StreamAck = nil
return nil
case *protocolpb.Envelope_StreamOpen:
so := payload.StreamOpen
if so == nil {
return fmt.Errorf("protobuf codec: stream_open payload is nil")
}
hdr := make(map[string][]string, len(so.Header))
for k, hv := range so.Header {
if hv == nil {
continue
}
hdr[k] = append([]string(nil), hv.Values...)
}
env.Type = MessageTypeStreamOpen
env.StreamOpen = &StreamOpen{
ID: StreamID(so.Id),
Service: so.ServiceName,
TargetAddr: so.TargetAddr,
Header: hdr,
}
env.StreamData = nil
env.StreamClose = nil
env.StreamAck = nil
env.HTTPRequest = nil
env.HTTPResponse = nil
return nil
case *protocolpb.Envelope_StreamData:
sd := payload.StreamData
if sd == nil {
return fmt.Errorf("protobuf codec: stream_data payload is nil")
}
env.Type = MessageTypeStreamData
env.StreamData = &StreamData{
ID: StreamID(sd.Id),
Seq: sd.Seq,
Data: append([]byte(nil), sd.Data...),
}
env.StreamOpen = nil
env.StreamClose = nil
env.StreamAck = nil
env.HTTPRequest = nil
env.HTTPResponse = nil
return nil
case *protocolpb.Envelope_StreamClose:
sc := payload.StreamClose
if sc == nil {
return fmt.Errorf("protobuf codec: stream_close payload is nil")
}
env.Type = MessageTypeStreamClose
env.StreamClose = &StreamClose{
ID: StreamID(sc.Id),
Error: sc.Error,
}
env.StreamOpen = nil
env.StreamData = nil
env.StreamAck = nil
env.HTTPRequest = nil
env.HTTPResponse = nil
return nil
case *protocolpb.Envelope_StreamAck:
sa := payload.StreamAck
if sa == nil {
return fmt.Errorf("protobuf codec: stream_ack payload is nil")
}
env.Type = MessageTypeStreamAck
env.StreamAck = &StreamAck{
ID: StreamID(sa.Id),
AckSeq: sa.AckSeq,
LostSeqs: append([]uint64(nil), sa.LostSeqs...),
WindowSize: sa.WindowSize,
}
env.StreamOpen = nil
env.StreamData = nil
env.StreamClose = nil
env.HTTPRequest = nil
env.HTTPResponse = nil
return nil
default:
return fmt.Errorf("protobuf codec: unsupported payload type %T", payload)
}
}
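
A minimal usage sketch for the codec above: round-trip a stream frame through DefaultCodec over an in-memory buffer (the package import path is inferred from the pb path used elsewhere in this diff; the tests below exercise the same framing against a datagram-style mock):

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/dalbodeule/hop-gate/internal/protocol"
)

func main() {
	var buf bytes.Buffer
	in := &protocol.Envelope{
		Type: protocol.MessageTypeStreamData,
		StreamData: &protocol.StreamData{
			ID:   protocol.StreamID("stream-1"),
			Seq:  0,
			Data: []byte("hello"),
		},
	}
	// Encode writes one [4-byte big-endian length][protobuf bytes] frame.
	if err := protocol.DefaultCodec.Encode(&buf, in); err != nil {
		log.Fatal(err)
	}
	var out protocol.Envelope
	if err := protocol.DefaultCodec.Decode(&buf, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Type, out.StreamData.Seq, string(out.StreamData.Data))
}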


@@ -0,0 +1,226 @@
package protocol
import (
"bufio"
"bytes"
"io"
"testing"
)
// mockDatagramConn simulates a datagram-based connection (like DTLS over UDP)
// where each Write sends a separate message and each Read receives a complete message.
// This mock verifies the FIXED behavior where the codec properly handles message boundaries.
type mockDatagramConn struct {
messages [][]byte
readIdx int
}
func newMockDatagramConn() *mockDatagramConn {
return &mockDatagramConn{
messages: make([][]byte, 0),
}
}
func (m *mockDatagramConn) Write(p []byte) (n int, err error) {
// Simulate datagram behavior: each Write is a separate message
msg := make([]byte, len(p))
copy(msg, p)
m.messages = append(m.messages, msg)
return len(p), nil
}
func (m *mockDatagramConn) Read(p []byte) (n int, err error) {
// Simulate datagram behavior: each Read returns a complete message
if m.readIdx >= len(m.messages) {
return 0, io.EOF
}
msg := m.messages[m.readIdx]
m.readIdx++
if len(p) < len(msg) {
return 0, io.ErrShortBuffer
}
copy(p, msg)
return len(msg), nil
}
// TestProtobufCodecDatagramBehavior tests that the protobuf codec works correctly
// with datagram-based transports (like DTLS over UDP) where message boundaries are preserved.
func TestProtobufCodecDatagramBehavior(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create a test envelope
testEnv := &Envelope{
Type: MessageTypeHTTP,
HTTPRequest: &Request{
RequestID: "test-req-123",
ClientID: "client-1",
ServiceName: "test-service",
Method: "GET",
URL: "/test/path",
Header: map[string][]string{
"User-Agent": {"test-client"},
},
Body: []byte("test body content"),
},
}
// Encode the envelope
if err := codec.Encode(conn, testEnv); err != nil {
t.Fatalf("Failed to encode envelope: %v", err)
}
// Verify that exactly one message was written (length prefix + data in single Write)
if len(conn.messages) != 1 {
t.Fatalf("Expected 1 message to be written, got %d", len(conn.messages))
}
// Verify the message structure: [4-byte length][protobuf data]
msg := conn.messages[0]
if len(msg) < 4 {
t.Fatalf("Message too short: %d bytes", len(msg))
}
// Decode the envelope using a buffered reader (as we do in actual code)
// to handle datagram-based reading properly
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
var decodedEnv Envelope
if err := codec.Decode(reader, &decodedEnv); err != nil {
t.Fatalf("Failed to decode envelope: %v", err)
}
// Verify the decoded envelope matches the original
if decodedEnv.Type != testEnv.Type {
t.Errorf("Type mismatch: got %v, want %v", decodedEnv.Type, testEnv.Type)
}
if decodedEnv.HTTPRequest == nil {
t.Fatal("HTTPRequest is nil after decode")
}
if decodedEnv.HTTPRequest.RequestID != testEnv.HTTPRequest.RequestID {
t.Errorf("RequestID mismatch: got %v, want %v", decodedEnv.HTTPRequest.RequestID, testEnv.HTTPRequest.RequestID)
}
if decodedEnv.HTTPRequest.Method != testEnv.HTTPRequest.Method {
t.Errorf("Method mismatch: got %v, want %v", decodedEnv.HTTPRequest.Method, testEnv.HTTPRequest.Method)
}
if decodedEnv.HTTPRequest.URL != testEnv.HTTPRequest.URL {
t.Errorf("URL mismatch: got %v, want %v", decodedEnv.HTTPRequest.URL, testEnv.HTTPRequest.URL)
}
if !bytes.Equal(decodedEnv.HTTPRequest.Body, testEnv.HTTPRequest.Body) {
t.Errorf("Body mismatch: got %v, want %v", decodedEnv.HTTPRequest.Body, testEnv.HTTPRequest.Body)
}
}
// TestProtobufCodecStreamData tests encoding/decoding of StreamData messages
func TestProtobufCodecStreamData(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create a StreamData envelope
testEnv := &Envelope{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-123"),
Seq: 42,
Data: []byte("stream data payload"),
},
}
// Encode
if err := codec.Encode(conn, testEnv); err != nil {
t.Fatalf("Failed to encode StreamData: %v", err)
}
// Verify single message
if len(conn.messages) != 1 {
t.Fatalf("Expected 1 message, got %d", len(conn.messages))
}
// Decode using a buffered reader (as we do in actual code)
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
var decodedEnv Envelope
if err := codec.Decode(reader, &decodedEnv); err != nil {
t.Fatalf("Failed to decode StreamData: %v", err)
}
// Verify
if decodedEnv.Type != MessageTypeStreamData {
t.Errorf("Type mismatch: got %v, want %v", decodedEnv.Type, MessageTypeStreamData)
}
if decodedEnv.StreamData == nil {
t.Fatal("StreamData is nil")
}
if decodedEnv.StreamData.ID != testEnv.StreamData.ID {
t.Errorf("StreamID mismatch: got %v, want %v", decodedEnv.StreamData.ID, testEnv.StreamData.ID)
}
if decodedEnv.StreamData.Seq != testEnv.StreamData.Seq {
t.Errorf("Seq mismatch: got %v, want %v", decodedEnv.StreamData.Seq, testEnv.StreamData.Seq)
}
if !bytes.Equal(decodedEnv.StreamData.Data, testEnv.StreamData.Data) {
t.Errorf("Data mismatch: got %v, want %v", decodedEnv.StreamData.Data, testEnv.StreamData.Data)
}
}
// TestProtobufCodecMultipleMessages tests encoding/decoding multiple messages
func TestProtobufCodecMultipleMessages(t *testing.T) {
codec := protobufCodec{}
conn := newMockDatagramConn()
// Create multiple test envelopes
envelopes := []*Envelope{
{
Type: MessageTypeStreamOpen,
StreamOpen: &StreamOpen{
ID: StreamID("stream-1"),
Service: "test-service",
TargetAddr: "127.0.0.1:8080",
},
},
{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-1"),
Seq: 1,
Data: []byte("first chunk"),
},
},
{
Type: MessageTypeStreamData,
StreamData: &StreamData{
ID: StreamID("stream-1"),
Seq: 2,
Data: []byte("second chunk"),
},
},
{
Type: MessageTypeStreamClose,
StreamClose: &StreamClose{
ID: StreamID("stream-1"),
Error: "",
},
},
}
// Encode all messages
for i, env := range envelopes {
if err := codec.Encode(conn, env); err != nil {
t.Fatalf("Failed to encode message %d: %v", i, err)
}
}
// Verify that each encode produced exactly one message
if len(conn.messages) != len(envelopes) {
t.Fatalf("Expected %d messages, got %d", len(envelopes), len(conn.messages))
}
// Decode and verify all messages using a buffered reader (as we do in actual code)
reader := bufio.NewReaderSize(conn, GetDTLSReadBufferSize())
for i := 0; i < len(envelopes); i++ {
var decoded Envelope
if err := codec.Decode(reader, &decoded); err != nil {
t.Fatalf("Failed to decode message %d: %v", i, err)
}
if decoded.Type != envelopes[i].Type {
t.Errorf("Message %d type mismatch: got %v, want %v", i, decoded.Type, envelopes[i].Type)
}
}
}


@@ -0,0 +1,103 @@
syntax = "proto3";
package hopgate.protocol.v1;
option go_package = "internal/protocol/pb;pb";
// HeaderValues 는 HTTP 헤더의 다중 값 표현을 위한 래퍼입니다.
// HeaderValues wraps multiple header values for a single HTTP header key.
message HeaderValues {
repeated string values = 1;
}
// Request 는 DTLS 터널 위에서 교환되는 HTTP 요청을 표현합니다.
// This mirrors internal/protocol.Request.
message Request {
string request_id = 1;
string client_id = 2; // optional client identifier
string service_name = 3; // logical service name on the client side
string method = 4;
string url = 5;
// HTTP header: map of key -> multiple values.
map<string, HeaderValues> header = 6;
// Raw HTTP body bytes.
bytes body = 7;
}
// Response 는 DTLS 터널 위에서 교환되는 HTTP 응답을 표현합니다.
// This mirrors internal/protocol.Response.
message Response {
string request_id = 1;
int32 status = 2;
// HTTP header.
map<string, HeaderValues> header = 3;
// Raw HTTP body bytes.
bytes body = 4;
// Optional error description when tunneling fails.
string error = 5;
}
// StreamOpen 은 새로운 스트림(HTTP 요청/응답, WebSocket 등)을 여는 메시지입니다.
// This represents opening a new stream (HTTP request/response, WebSocket, etc.).
message StreamOpen {
string id = 1; // StreamID (text form)
// Which logical service / local target to use on the client side.
string service_name = 2;
string target_addr = 3; // e.g. "127.0.0.1:8080"
// Initial HTTP-like headers (including Upgrade, etc.).
map<string, HeaderValues> header = 4;
}
// StreamData 는 이미 열린 스트림에 대한 단방향 데이터 프레임입니다.
// This is a unidirectional data frame on an already-open stream.
message StreamData {
string id = 1; // StreamID
uint64 seq = 2; // per-stream sequence number starting from 0
bytes data = 3;
}
// StreamAck 는 StreamData 에 대한 ACK/NACK 및 선택적 재전송 힌트를 전달합니다.
// This conveys ACK/NACK and optional retransmission hints for StreamData.
message StreamAck {
string id = 1;
// Last contiguously received sequence number (starting from 0).
uint64 ack_seq = 2;
// Additional missing sequence numbers beyond ack_seq (optional).
repeated uint64 lost_seqs = 3;
// Optional receive window size hint.
uint32 window_size = 4;
}
// StreamClose 는 스트림 종료(정상/에러)를 알립니다.
// This indicates normal or error termination of a stream.
message StreamClose {
string id = 1;
string error = 2; // empty means normal close
}
// Envelope 는 DTLS 세션 위에서 교환되는 상위 레벨 메시지 컨테이너입니다.
// 하나의 Envelope 에는 HTTP 요청/응답 또는 스트림 관련 메시지 중 하나만 포함됩니다.
// Envelope is the top-level container exchanged over the DTLS session.
// Exactly one payload (http_request/http_response/stream_*) is set per message.
message Envelope {
oneof payload {
Request http_request = 1;
Response http_response = 2;
StreamOpen stream_open = 3;
StreamData stream_data = 4;
StreamClose stream_close = 5;
StreamAck stream_ack = 6;
}
}
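
As a concrete example of the oneof payload, a minimal sketch that builds a StreamData envelope with the generated Go types and marshals it (types and field names as in the generated code; the import path is the one used elsewhere in this diff):

package main

import (
	"fmt"
	"log"

	protocolpb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
	"google.golang.org/protobuf/proto"
)

func main() {
	env := &protocolpb.Envelope{
		Payload: &protocolpb.Envelope_StreamData{
			StreamData: &protocolpb.StreamData{
				Id:   "stream-1",
				Seq:  0,
				Data: []byte("hello"),
			},
		},
	}
	raw, err := proto.Marshal(env)
	if err != nil {
		log.Fatal(err)
	}

	var decoded protocolpb.Envelope
	if err := proto.Unmarshal(raw, &decoded); err != nil {
		log.Fatal(err)
	}
	// Exactly one payload is set per Envelope; a type assertion recovers it.
	if sd, ok := decoded.Payload.(*protocolpb.Envelope_StreamData); ok {
		fmt.Println("seq:", sd.StreamData.Seq, "bytes:", len(sd.StreamData.Data))
	}
}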


@@ -0,0 +1,799 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc v6.33.1
// source: internal/protocol/hopgate_stream.proto
package pb
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// HeaderValues 는 HTTP 헤더의 다중 값 표현을 위한 래퍼입니다.
// HeaderValues wraps multiple header values for a single HTTP header key.
type HeaderValues struct {
state protoimpl.MessageState `protogen:"open.v1"`
Values []string `protobuf:"bytes,1,rep,name=values,proto3" json:"values,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *HeaderValues) Reset() {
*x = HeaderValues{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *HeaderValues) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*HeaderValues) ProtoMessage() {}
func (x *HeaderValues) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use HeaderValues.ProtoReflect.Descriptor instead.
func (*HeaderValues) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{0}
}
func (x *HeaderValues) GetValues() []string {
if x != nil {
return x.Values
}
return nil
}
// Request 는 DTLS 터널 위에서 교환되는 HTTP 요청을 표현합니다.
// This mirrors internal/protocol.Request.
type Request struct {
state protoimpl.MessageState `protogen:"open.v1"`
RequestId string `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
ClientId string `protobuf:"bytes,2,opt,name=client_id,json=clientId,proto3" json:"client_id,omitempty"` // optional client identifier
ServiceName string `protobuf:"bytes,3,opt,name=service_name,json=serviceName,proto3" json:"service_name,omitempty"` // logical service name on the client side
Method string `protobuf:"bytes,4,opt,name=method,proto3" json:"method,omitempty"`
Url string `protobuf:"bytes,5,opt,name=url,proto3" json:"url,omitempty"`
// HTTP header: map of key -> multiple values.
Header map[string]*HeaderValues `protobuf:"bytes,6,rep,name=header,proto3" json:"header,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
// Raw HTTP body bytes.
Body []byte `protobuf:"bytes,7,opt,name=body,proto3" json:"body,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Request) Reset() {
*x = Request{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Request) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Request) ProtoMessage() {}
func (x *Request) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Request.ProtoReflect.Descriptor instead.
func (*Request) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{1}
}
func (x *Request) GetRequestId() string {
if x != nil {
return x.RequestId
}
return ""
}
func (x *Request) GetClientId() string {
if x != nil {
return x.ClientId
}
return ""
}
func (x *Request) GetServiceName() string {
if x != nil {
return x.ServiceName
}
return ""
}
func (x *Request) GetMethod() string {
if x != nil {
return x.Method
}
return ""
}
func (x *Request) GetUrl() string {
if x != nil {
return x.Url
}
return ""
}
func (x *Request) GetHeader() map[string]*HeaderValues {
if x != nil {
return x.Header
}
return nil
}
func (x *Request) GetBody() []byte {
if x != nil {
return x.Body
}
return nil
}
// Response 는 DTLS 터널 위에서 교환되는 HTTP 응답을 표현합니다.
// This mirrors internal/protocol.Response.
type Response struct {
state protoimpl.MessageState `protogen:"open.v1"`
RequestId string `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
Status int32 `protobuf:"varint,2,opt,name=status,proto3" json:"status,omitempty"`
// HTTP header.
Header map[string]*HeaderValues `protobuf:"bytes,3,rep,name=header,proto3" json:"header,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
// Raw HTTP body bytes.
Body []byte `protobuf:"bytes,4,opt,name=body,proto3" json:"body,omitempty"`
// Optional error description when tunneling fails.
Error string `protobuf:"bytes,5,opt,name=error,proto3" json:"error,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Response) Reset() {
*x = Response{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Response) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Response) ProtoMessage() {}
func (x *Response) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Response.ProtoReflect.Descriptor instead.
func (*Response) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{2}
}
func (x *Response) GetRequestId() string {
if x != nil {
return x.RequestId
}
return ""
}
func (x *Response) GetStatus() int32 {
if x != nil {
return x.Status
}
return 0
}
func (x *Response) GetHeader() map[string]*HeaderValues {
if x != nil {
return x.Header
}
return nil
}
func (x *Response) GetBody() []byte {
if x != nil {
return x.Body
}
return nil
}
func (x *Response) GetError() string {
if x != nil {
return x.Error
}
return ""
}
// StreamOpen 은 새로운 스트림(HTTP 요청/응답, WebSocket 등)을 여는 메시지입니다.
// This represents opening a new stream (HTTP request/response, WebSocket, etc.).
type StreamOpen struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"` // StreamID (text form)
// Which logical service / local target to use on the client side.
ServiceName string `protobuf:"bytes,2,opt,name=service_name,json=serviceName,proto3" json:"service_name,omitempty"`
TargetAddr string `protobuf:"bytes,3,opt,name=target_addr,json=targetAddr,proto3" json:"target_addr,omitempty"` // e.g. "127.0.0.1:8080"
// Initial HTTP-like headers (including Upgrade, etc.).
Header map[string]*HeaderValues `protobuf:"bytes,4,rep,name=header,proto3" json:"header,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamOpen) Reset() {
*x = StreamOpen{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamOpen) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StreamOpen) ProtoMessage() {}
func (x *StreamOpen) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StreamOpen.ProtoReflect.Descriptor instead.
func (*StreamOpen) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{3}
}
func (x *StreamOpen) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *StreamOpen) GetServiceName() string {
if x != nil {
return x.ServiceName
}
return ""
}
func (x *StreamOpen) GetTargetAddr() string {
if x != nil {
return x.TargetAddr
}
return ""
}
func (x *StreamOpen) GetHeader() map[string]*HeaderValues {
if x != nil {
return x.Header
}
return nil
}
// StreamData 는 이미 열린 스트림에 대한 단방향 데이터 프레임입니다.
// This is a unidirectional data frame on an already-open stream.
type StreamData struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"` // StreamID
Seq uint64 `protobuf:"varint,2,opt,name=seq,proto3" json:"seq,omitempty"` // per-stream sequence number starting from 0
Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamData) Reset() {
*x = StreamData{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StreamData) ProtoMessage() {}
func (x *StreamData) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StreamData.ProtoReflect.Descriptor instead.
func (*StreamData) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{4}
}
func (x *StreamData) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *StreamData) GetSeq() uint64 {
if x != nil {
return x.Seq
}
return 0
}
func (x *StreamData) GetData() []byte {
if x != nil {
return x.Data
}
return nil
}
// StreamAck 는 StreamData 에 대한 ACK/NACK 및 선택적 재전송 힌트를 전달합니다.
// This conveys ACK/NACK and optional retransmission hints for StreamData.
type StreamAck struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// Last contiguously received sequence number (starting from 0).
AckSeq uint64 `protobuf:"varint,2,opt,name=ack_seq,json=ackSeq,proto3" json:"ack_seq,omitempty"`
// Additional missing sequence numbers beyond ack_seq (optional).
LostSeqs []uint64 `protobuf:"varint,3,rep,packed,name=lost_seqs,json=lostSeqs,proto3" json:"lost_seqs,omitempty"`
// Optional receive window size hint.
WindowSize uint32 `protobuf:"varint,4,opt,name=window_size,json=windowSize,proto3" json:"window_size,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamAck) Reset() {
*x = StreamAck{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamAck) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StreamAck) ProtoMessage() {}
func (x *StreamAck) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StreamAck.ProtoReflect.Descriptor instead.
func (*StreamAck) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{5}
}
func (x *StreamAck) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *StreamAck) GetAckSeq() uint64 {
if x != nil {
return x.AckSeq
}
return 0
}
func (x *StreamAck) GetLostSeqs() []uint64 {
if x != nil {
return x.LostSeqs
}
return nil
}
func (x *StreamAck) GetWindowSize() uint32 {
if x != nil {
return x.WindowSize
}
return 0
}
// StreamClose 는 스트림 종료(정상/에러)를 알립니다.
// This indicates normal or error termination of a stream.
type StreamClose struct {
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error,proto3" json:"error,omitempty"` // empty means normal close
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamClose) Reset() {
*x = StreamClose{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamClose) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StreamClose) ProtoMessage() {}
func (x *StreamClose) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StreamClose.ProtoReflect.Descriptor instead.
func (*StreamClose) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{6}
}
func (x *StreamClose) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *StreamClose) GetError() string {
if x != nil {
return x.Error
}
return ""
}
// Envelope 는 DTLS 세션 위에서 교환되는 상위 레벨 메시지 컨테이너입니다.
// 하나의 Envelope 에는 HTTP 요청/응답 또는 스트림 관련 메시지 중 하나만 포함됩니다.
// Envelope is the top-level container exchanged over the DTLS session.
// Exactly one payload (http_request/http_response/stream_*) is set per message.
type Envelope struct {
state protoimpl.MessageState `protogen:"open.v1"`
// Types that are valid to be assigned to Payload:
//
// *Envelope_HttpRequest
// *Envelope_HttpResponse
// *Envelope_StreamOpen
// *Envelope_StreamData
// *Envelope_StreamClose
// *Envelope_StreamAck
Payload isEnvelope_Payload `protobuf_oneof:"payload"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Envelope) Reset() {
*x = Envelope{}
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Envelope) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Envelope) ProtoMessage() {}
func (x *Envelope) ProtoReflect() protoreflect.Message {
mi := &file_internal_protocol_hopgate_stream_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Envelope.ProtoReflect.Descriptor instead.
func (*Envelope) Descriptor() ([]byte, []int) {
return file_internal_protocol_hopgate_stream_proto_rawDescGZIP(), []int{7}
}
func (x *Envelope) GetPayload() isEnvelope_Payload {
if x != nil {
return x.Payload
}
return nil
}
func (x *Envelope) GetHttpRequest() *Request {
if x != nil {
if x, ok := x.Payload.(*Envelope_HttpRequest); ok {
return x.HttpRequest
}
}
return nil
}
func (x *Envelope) GetHttpResponse() *Response {
if x != nil {
if x, ok := x.Payload.(*Envelope_HttpResponse); ok {
return x.HttpResponse
}
}
return nil
}
func (x *Envelope) GetStreamOpen() *StreamOpen {
if x != nil {
if x, ok := x.Payload.(*Envelope_StreamOpen); ok {
return x.StreamOpen
}
}
return nil
}
func (x *Envelope) GetStreamData() *StreamData {
if x != nil {
if x, ok := x.Payload.(*Envelope_StreamData); ok {
return x.StreamData
}
}
return nil
}
func (x *Envelope) GetStreamClose() *StreamClose {
if x != nil {
if x, ok := x.Payload.(*Envelope_StreamClose); ok {
return x.StreamClose
}
}
return nil
}
func (x *Envelope) GetStreamAck() *StreamAck {
if x != nil {
if x, ok := x.Payload.(*Envelope_StreamAck); ok {
return x.StreamAck
}
}
return nil
}
type isEnvelope_Payload interface {
isEnvelope_Payload()
}
type Envelope_HttpRequest struct {
HttpRequest *Request `protobuf:"bytes,1,opt,name=http_request,json=httpRequest,proto3,oneof"`
}
type Envelope_HttpResponse struct {
HttpResponse *Response `protobuf:"bytes,2,opt,name=http_response,json=httpResponse,proto3,oneof"`
}
type Envelope_StreamOpen struct {
StreamOpen *StreamOpen `protobuf:"bytes,3,opt,name=stream_open,json=streamOpen,proto3,oneof"`
}
type Envelope_StreamData struct {
StreamData *StreamData `protobuf:"bytes,4,opt,name=stream_data,json=streamData,proto3,oneof"`
}
type Envelope_StreamClose struct {
StreamClose *StreamClose `protobuf:"bytes,5,opt,name=stream_close,json=streamClose,proto3,oneof"`
}
type Envelope_StreamAck struct {
StreamAck *StreamAck `protobuf:"bytes,6,opt,name=stream_ack,json=streamAck,proto3,oneof"`
}
func (*Envelope_HttpRequest) isEnvelope_Payload() {}
func (*Envelope_HttpResponse) isEnvelope_Payload() {}
func (*Envelope_StreamOpen) isEnvelope_Payload() {}
func (*Envelope_StreamData) isEnvelope_Payload() {}
func (*Envelope_StreamClose) isEnvelope_Payload() {}
func (*Envelope_StreamAck) isEnvelope_Payload() {}
var File_internal_protocol_hopgate_stream_proto protoreflect.FileDescriptor
const file_internal_protocol_hopgate_stream_proto_rawDesc = "" +
"\n" +
"&internal/protocol/hopgate_stream.proto\x12\x13hopgate.protocol.v1\"&\n" +
"\fHeaderValues\x12\x16\n" +
"\x06values\x18\x01 \x03(\tR\x06values\"\xc6\x02\n" +
"\aRequest\x12\x1d\n" +
"\n" +
"request_id\x18\x01 \x01(\tR\trequestId\x12\x1b\n" +
"\tclient_id\x18\x02 \x01(\tR\bclientId\x12!\n" +
"\fservice_name\x18\x03 \x01(\tR\vserviceName\x12\x16\n" +
"\x06method\x18\x04 \x01(\tR\x06method\x12\x10\n" +
"\x03url\x18\x05 \x01(\tR\x03url\x12@\n" +
"\x06header\x18\x06 \x03(\v2(.hopgate.protocol.v1.Request.HeaderEntryR\x06header\x12\x12\n" +
"\x04body\x18\a \x01(\fR\x04body\x1a\\\n" +
"\vHeaderEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x127\n" +
"\x05value\x18\x02 \x01(\v2!.hopgate.protocol.v1.HeaderValuesR\x05value:\x028\x01\"\x8c\x02\n" +
"\bResponse\x12\x1d\n" +
"\n" +
"request_id\x18\x01 \x01(\tR\trequestId\x12\x16\n" +
"\x06status\x18\x02 \x01(\x05R\x06status\x12A\n" +
"\x06header\x18\x03 \x03(\v2).hopgate.protocol.v1.Response.HeaderEntryR\x06header\x12\x12\n" +
"\x04body\x18\x04 \x01(\fR\x04body\x12\x14\n" +
"\x05error\x18\x05 \x01(\tR\x05error\x1a\\\n" +
"\vHeaderEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x127\n" +
"\x05value\x18\x02 \x01(\v2!.hopgate.protocol.v1.HeaderValuesR\x05value:\x028\x01\"\x83\x02\n" +
"\n" +
"StreamOpen\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12!\n" +
"\fservice_name\x18\x02 \x01(\tR\vserviceName\x12\x1f\n" +
"\vtarget_addr\x18\x03 \x01(\tR\n" +
"targetAddr\x12C\n" +
"\x06header\x18\x04 \x03(\v2+.hopgate.protocol.v1.StreamOpen.HeaderEntryR\x06header\x1a\\\n" +
"\vHeaderEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x127\n" +
"\x05value\x18\x02 \x01(\v2!.hopgate.protocol.v1.HeaderValuesR\x05value:\x028\x01\"B\n" +
"\n" +
"StreamData\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12\x10\n" +
"\x03seq\x18\x02 \x01(\x04R\x03seq\x12\x12\n" +
"\x04data\x18\x03 \x01(\fR\x04data\"r\n" +
"\tStreamAck\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12\x17\n" +
"\aack_seq\x18\x02 \x01(\x04R\x06ackSeq\x12\x1b\n" +
"\tlost_seqs\x18\x03 \x03(\x04R\blostSeqs\x12\x1f\n" +
"\vwindow_size\x18\x04 \x01(\rR\n" +
"windowSize\"3\n" +
"\vStreamClose\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12\x14\n" +
"\x05error\x18\x02 \x01(\tR\x05error\"\xae\x03\n" +
"\bEnvelope\x12A\n" +
"\fhttp_request\x18\x01 \x01(\v2\x1c.hopgate.protocol.v1.RequestH\x00R\vhttpRequest\x12D\n" +
"\rhttp_response\x18\x02 \x01(\v2\x1d.hopgate.protocol.v1.ResponseH\x00R\fhttpResponse\x12B\n" +
"\vstream_open\x18\x03 \x01(\v2\x1f.hopgate.protocol.v1.StreamOpenH\x00R\n" +
"streamOpen\x12B\n" +
"\vstream_data\x18\x04 \x01(\v2\x1f.hopgate.protocol.v1.StreamDataH\x00R\n" +
"streamData\x12E\n" +
"\fstream_close\x18\x05 \x01(\v2 .hopgate.protocol.v1.StreamCloseH\x00R\vstreamClose\x12?\n" +
"\n" +
"stream_ack\x18\x06 \x01(\v2\x1e.hopgate.protocol.v1.StreamAckH\x00R\tstreamAckB\t\n" +
"\apayloadB\x19Z\x17internal/protocol/pb;pbb\x06proto3"
var (
file_internal_protocol_hopgate_stream_proto_rawDescOnce sync.Once
file_internal_protocol_hopgate_stream_proto_rawDescData []byte
)
func file_internal_protocol_hopgate_stream_proto_rawDescGZIP() []byte {
file_internal_protocol_hopgate_stream_proto_rawDescOnce.Do(func() {
file_internal_protocol_hopgate_stream_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_internal_protocol_hopgate_stream_proto_rawDesc), len(file_internal_protocol_hopgate_stream_proto_rawDesc)))
})
return file_internal_protocol_hopgate_stream_proto_rawDescData
}
var file_internal_protocol_hopgate_stream_proto_msgTypes = make([]protoimpl.MessageInfo, 11)
var file_internal_protocol_hopgate_stream_proto_goTypes = []any{
(*HeaderValues)(nil), // 0: hopgate.protocol.v1.HeaderValues
(*Request)(nil), // 1: hopgate.protocol.v1.Request
(*Response)(nil), // 2: hopgate.protocol.v1.Response
(*StreamOpen)(nil), // 3: hopgate.protocol.v1.StreamOpen
(*StreamData)(nil), // 4: hopgate.protocol.v1.StreamData
(*StreamAck)(nil), // 5: hopgate.protocol.v1.StreamAck
(*StreamClose)(nil), // 6: hopgate.protocol.v1.StreamClose
(*Envelope)(nil), // 7: hopgate.protocol.v1.Envelope
nil, // 8: hopgate.protocol.v1.Request.HeaderEntry
nil, // 9: hopgate.protocol.v1.Response.HeaderEntry
nil, // 10: hopgate.protocol.v1.StreamOpen.HeaderEntry
}
var file_internal_protocol_hopgate_stream_proto_depIdxs = []int32{
8, // 0: hopgate.protocol.v1.Request.header:type_name -> hopgate.protocol.v1.Request.HeaderEntry
9, // 1: hopgate.protocol.v1.Response.header:type_name -> hopgate.protocol.v1.Response.HeaderEntry
10, // 2: hopgate.protocol.v1.StreamOpen.header:type_name -> hopgate.protocol.v1.StreamOpen.HeaderEntry
1, // 3: hopgate.protocol.v1.Envelope.http_request:type_name -> hopgate.protocol.v1.Request
2, // 4: hopgate.protocol.v1.Envelope.http_response:type_name -> hopgate.protocol.v1.Response
3, // 5: hopgate.protocol.v1.Envelope.stream_open:type_name -> hopgate.protocol.v1.StreamOpen
4, // 6: hopgate.protocol.v1.Envelope.stream_data:type_name -> hopgate.protocol.v1.StreamData
6, // 7: hopgate.protocol.v1.Envelope.stream_close:type_name -> hopgate.protocol.v1.StreamClose
5, // 8: hopgate.protocol.v1.Envelope.stream_ack:type_name -> hopgate.protocol.v1.StreamAck
0, // 9: hopgate.protocol.v1.Request.HeaderEntry.value:type_name -> hopgate.protocol.v1.HeaderValues
0, // 10: hopgate.protocol.v1.Response.HeaderEntry.value:type_name -> hopgate.protocol.v1.HeaderValues
0, // 11: hopgate.protocol.v1.StreamOpen.HeaderEntry.value:type_name -> hopgate.protocol.v1.HeaderValues
12, // [12:12] is the sub-list for method output_type
12, // [12:12] is the sub-list for method input_type
12, // [12:12] is the sub-list for extension type_name
12, // [12:12] is the sub-list for extension extendee
0, // [0:12] is the sub-list for field type_name
}
func init() { file_internal_protocol_hopgate_stream_proto_init() }
func file_internal_protocol_hopgate_stream_proto_init() {
if File_internal_protocol_hopgate_stream_proto != nil {
return
}
file_internal_protocol_hopgate_stream_proto_msgTypes[7].OneofWrappers = []any{
(*Envelope_HttpRequest)(nil),
(*Envelope_HttpResponse)(nil),
(*Envelope_StreamOpen)(nil),
(*Envelope_StreamData)(nil),
(*Envelope_StreamClose)(nil),
(*Envelope_StreamAck)(nil),
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_internal_protocol_hopgate_stream_proto_rawDesc), len(file_internal_protocol_hopgate_stream_proto_rawDesc)),
NumEnums: 0,
NumMessages: 11,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_internal_protocol_hopgate_stream_proto_goTypes,
DependencyIndexes: file_internal_protocol_hopgate_stream_proto_depIdxs,
MessageInfos: file_internal_protocol_hopgate_stream_proto_msgTypes,
}.Build()
File_internal_protocol_hopgate_stream_proto = out.File
file_internal_protocol_hopgate_stream_proto_goTypes = nil
file_internal_protocol_hopgate_stream_proto_depIdxs = nil
}

View File

@@ -0,0 +1,119 @@
package pb
import (
"context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// HopGateTunnelClient is the client API for the HopGateTunnel service.
type HopGateTunnelClient interface {
// OpenTunnel establishes a long-lived bi-directional stream between
// a HopGate client and the server. Both HTTP requests and responses
// are multiplexed as Envelope messages on this stream.
OpenTunnel(ctx context.Context, opts ...grpc.CallOption) (HopGateTunnel_OpenTunnelClient, error)
}
type hopGateTunnelClient struct {
cc grpc.ClientConnInterface
}
// NewHopGateTunnelClient creates a new HopGateTunnelClient.
func NewHopGateTunnelClient(cc grpc.ClientConnInterface) HopGateTunnelClient {
return &hopGateTunnelClient{cc: cc}
}
func (c *hopGateTunnelClient) OpenTunnel(ctx context.Context, opts ...grpc.CallOption) (HopGateTunnel_OpenTunnelClient, error) {
stream, err := c.cc.NewStream(ctx, &_HopGateTunnel_serviceDesc.Streams[0], "/hopgate.protocol.v1.HopGateTunnel/OpenTunnel", opts...)
if err != nil {
return nil, err
}
return &hopGateTunnelOpenTunnelClient{ClientStream: stream}, nil
}
// HopGateTunnel_OpenTunnelClient is the client-side stream for OpenTunnel.
type HopGateTunnel_OpenTunnelClient interface {
Send(*Envelope) error
Recv() (*Envelope, error)
grpc.ClientStream
}
type hopGateTunnelOpenTunnelClient struct {
grpc.ClientStream
}
func (x *hopGateTunnelOpenTunnelClient) Send(m *Envelope) error {
return x.ClientStream.SendMsg(m)
}
func (x *hopGateTunnelOpenTunnelClient) Recv() (*Envelope, error) {
m := new(Envelope)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// HopGateTunnelServer is the server API for the HopGateTunnel service.
type HopGateTunnelServer interface {
// OpenTunnel handles a long-lived bi-directional stream between the server
// and a HopGate client. Implementations are responsible for reading and
// writing Envelope messages on the stream.
OpenTunnel(HopGateTunnel_OpenTunnelServer) error
}
// UnimplementedHopGateTunnelServer can be embedded to have forward compatible implementations.
type UnimplementedHopGateTunnelServer struct{}
// OpenTunnel returns an Unimplemented error by default.
func (UnimplementedHopGateTunnelServer) OpenTunnel(HopGateTunnel_OpenTunnelServer) error {
return status.Errorf(codes.Unimplemented, "method OpenTunnel not implemented")
}
// RegisterHopGateTunnelServer registers the HopGateTunnel service with the given gRPC server.
func RegisterHopGateTunnelServer(s grpc.ServiceRegistrar, srv HopGateTunnelServer) {
s.RegisterService(&_HopGateTunnel_serviceDesc, srv)
}
// HopGateTunnel_OpenTunnelServer is the server-side stream for OpenTunnel.
type HopGateTunnel_OpenTunnelServer interface {
Send(*Envelope) error
Recv() (*Envelope, error)
grpc.ServerStream
}
func _HopGateTunnel_OpenTunnel_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(HopGateTunnelServer).OpenTunnel(&hopGateTunnelOpenTunnelServer{ServerStream: stream})
}
type hopGateTunnelOpenTunnelServer struct {
grpc.ServerStream
}
func (x *hopGateTunnelOpenTunnelServer) Send(m *Envelope) error {
return x.ServerStream.SendMsg(m)
}
func (x *hopGateTunnelOpenTunnelServer) Recv() (*Envelope, error) {
m := new(Envelope)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
var _HopGateTunnel_serviceDesc = grpc.ServiceDesc{
ServiceName: "hopgate.protocol.v1.HopGateTunnel",
HandlerType: (*HopGateTunnelServer)(nil),
Streams: []grpc.StreamDesc{
{
StreamName: "OpenTunnel",
Handler: _HopGateTunnel_OpenTunnel_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "internal/protocol/hopgate_stream.proto",
}

View File

@@ -32,6 +32,11 @@ type Response struct {
// MessageType 은 DTLS 위에서 교환되는 상위 레벨 메시지 종류를 나타냅니다.
type MessageType string
// StreamChunkSize 는 스트림 터널링 시 단일 StreamData 프레임에 담을 최대 payload 크기입니다.
// 현재 구현에서는 4KiB 로 고정하여 DTLS/UDP MTU 한계를 여유 있게 피하도록 합니다.
// StreamChunkSize is the maximum payload size per StreamData frame (4KiB).
const StreamChunkSize = 4 * 1024
const (
// MessageTypeHTTP 는 기존 단일 HTTP 요청/응답 메시지를 의미합니다.
// 이 경우 HTTPRequest / HTTPResponse 필드를 사용합니다.
@@ -41,10 +46,17 @@ const (
MessageTypeStreamOpen MessageType = "stream_open"
// MessageTypeStreamData 는 열린 스트림에 대한 양방향 데이터 프레임을 의미합니다.
// HTTP 바디 chunk 를 비롯한 실제 payload 는 이 타입을 통해 전송됩니다.
// Stream data frames for an already-opened stream (HTTP body chunks, etc.).
MessageTypeStreamData MessageType = "stream_data"
// MessageTypeStreamClose 는 스트림 종료(정상/에러)를 의미합니다.
// Normal or error-termination of a stream.
MessageTypeStreamClose MessageType = "stream_close"
// MessageTypeStreamAck 는 스트림 데이터 프레임에 대한 ACK/NACK 및 재전송 힌트를 전달합니다.
// Stream-level ACK/NACK frames for selective retransmission hints.
MessageTypeStreamAck MessageType = "stream_ack"
)
// Envelope 는 DTLS 세션 위에서 교환되는 상위 레벨 메시지 컨테이너입니다.
@@ -60,11 +72,24 @@ type Envelope struct {
StreamOpen *StreamOpen `json:"stream_open,omitempty"`
StreamData *StreamData `json:"stream_data,omitempty"`
StreamClose *StreamClose `json:"stream_close,omitempty"`
// 스트림 제어 메시지 (ACK/NACK, 재전송 힌트 등)
// Stream-level control messages (ACK/NACK, retransmission hints, etc.).
StreamAck *StreamAck `json:"stream_ack,omitempty"`
}
// StreamID 는 스트림(예: 특정 WebSocket 연결 또는 TCP 커넥션)을 구분하기 위한 식별자입니다.
type StreamID string
// HTTP-over-stream 터널링에서 사용되는 pseudo-header 키 상수입니다.
// These pseudo-header keys are used when tunneling HTTP over the stream protocol.
const (
HeaderKeyMethod = "X-HopGate-Method"
HeaderKeyURL = "X-HopGate-URL"
HeaderKeyHost = "X-HopGate-Host"
HeaderKeyStatus = "X-HopGate-Status"
)
// StreamOpen 은 새로운 스트림을 여는 요청을 나타냅니다.
type StreamOpen struct {
ID StreamID `json:"id"`
@@ -77,11 +102,44 @@ type StreamOpen struct {
}
// StreamData 는 이미 열린 스트림에 대해 한 방향으로 전송되는 데이터 프레임을 표현합니다.
// DTLS/UDP 특성상 손실/중복/순서 뒤바뀜을 감지하고 재전송할 수 있도록
// 각 스트림 내에서 0부터 시작하는 시퀀스 번호(Seq)를 포함합니다.
//
// StreamData represents a unidirectional data frame on an already-opened stream.
// To support loss/duplication/reordering detection and retransmission over DTLS/UDP,
// it carries a per-stream sequence number (Seq) starting from 0.
type StreamData struct {
ID StreamID `json:"id"`
Seq uint64 `json:"seq"`
Data []byte `json:"data"`
}
// StreamAck 는 스트림 데이터 프레임에 대한 ACK/NACK 및 선택적 재전송 요청 정보를 전달합니다.
// AckSeq 는 수신 측에서 "연속적으로" 수신 완료한 마지막 Seq 를 의미하며,
// LostSeqs 는 그 이후 구간에서 누락된 시퀀스 번호(선택적)를 나타냅니다.
//
// StreamAck conveys ACK/NACK and optional retransmission hints for stream data frames.
// AckSeq denotes the last sequence number received contiguously by the receiver,
// while LostSeqs can list additional missing sequence numbers beyond AckSeq.
type StreamAck struct {
ID StreamID `json:"id"`
// AckSeq 는 수신 측에서 0부터 시작해 연속으로 수신 완료한 마지막 Seq 입니다.
// AckSeq is the last contiguously received sequence number starting from 0.
AckSeq uint64 `json:"ack_seq"`
// LostSeqs 는 AckSeq 이후 구간에서 누락된 시퀀스 번호 목록입니다(선택).
// 이 필드는 선택적 selective retransmission 힌트를 제공하기 위해 사용됩니다.
//
// LostSeqs is an optional list of missing sequence numbers beyond AckSeq,
// used as a hint for selective retransmission.
LostSeqs []uint64 `json:"lost_seqs,omitempty"`
// WindowSize 는 수신 측이 허용 가능한 in-flight 프레임 수를 나타내는 선택적 힌트입니다.
// WindowSize is an optional hint for the allowed number of in-flight frames.
WindowSize uint32 `json:"window_size,omitempty"`
}
// StreamClose 는 스트림 종료를 알리는 메시지입니다.
type StreamClose struct {
ID StreamID `json:"id"`

File diff suppressed because it is too large.

View File

@@ -11,8 +11,10 @@ This document tracks implementation progress against the HopGate architecture an
Architecture and README are documented in both Korean and English.
- 서버/클라이언트 엔트리 포인트, DTLS 핸드셰이크, 기본 PostgreSQL/ent 스키마까지 1차 뼈대 구현 완료.
First skeleton implementation is done for server/client entrypoints, DTLS handshake, and basic PostgreSQL/ent schema.
- 실제 Proxy 동작(HTTP ↔ DTLS 터널링), Admin API 비즈니스 로직, ACME 연동 등은 아직 남아 있음.
Actual proxying (HTTP ↔ DTLS tunneling), admin API business logic, and real ACME integration are still pending.
- 기본 Proxy 동작(HTTP ↔ DTLS 터널링), Admin API 비즈니스 로직, ACME 기반 인증서 관리는 구현 완료된 상태.
Core proxying (HTTP ↔ DTLS tunneling), admin API business logic, and ACME-based certificate management are implemented.
- 스트림 ARQ, Observability, Hardening, ACME 고급 전략 등은 아직 남아 있는 다음 단계 작업이다.
Stream-level ARQ, observability, hardening, and advanced ACME operational strategies remain as next-step work items.
---
@@ -44,7 +46,7 @@ This document tracks implementation progress against the HopGate architecture an
- PostgreSQL connection and ent schema init (`store.OpenPostgresFromEnv`).
- Self-signed localhost cert generation in debug mode (`dtls.NewSelfSignedLocalhostConfig`).
- DTLS server creation (`dtls.NewPionServer`) and the Accept + Handshake loop (`PerformServerHandshake`).
- Temporarily allow every domain/API key combination via DummyDomainValidator.
- Validate `(domain, client_api_key)` pairs plus optional DNS/IP checks via the ent-based `DomainValidator` + `domainGateValidator`.
- Client main: [`cmd/client/main.go`](cmd/client/main.go)
- Merged CLI + env configuration (precedence: CLI > env).
@@ -121,7 +123,7 @@ This document tracks implementation progress against the HopGate architecture an
- `POST /api/v1/admin/domains/register`
- `POST /api/v1/admin/domains/unregister`
- JSON request/response structures defined with basic error handling.
- Actual service/router wiring and the ent-based implementation are still incomplete.
- The actual service (`DomainService`), router wiring, and ent-based implementation are now complete, so domain register/unregister works.
---
@@ -222,23 +224,46 @@ This document tracks implementation progress against the HopGate architecture an
---
### 3.3 Proxy Core / HTTP Tunneling
### 3.3 Proxy Core / gRPC Tunneling
- [x] Extend the server-side proxy implementation: [`internal/proxy/server.go`](internal/proxy/server.go)
- Map HTTP/HTTPS listeners to DTLS sessions.
- Add a `Router` implementation (domain/path → client/service).
- Serialize/deserialize requests and responses to/from the `internal/protocol` structs.
HopGate's end goal is to carry HTTP traffic over a **TCP + TLS(HTTPS) + HTTP/2 + gRPC** tunnel.
This section keeps the DTLS-based initial design only as a record and redefines the actual implementation and remaining work in terms of the gRPC tunnel.
- [x] Extend the client-side proxy implementation: [`internal/proxy/client.go`](internal/proxy/client.go)
- Implement the loop: receive `protocol.Request` over the DTLS session → call the local HTTP service → send `protocol.Response`.
- Timeout/cancellation/error handling.
- [x] Design/implement the server-side gRPC tunnel endpoint
- On the same port as the public HTTPS listener (443/TCP), design routing so that:
- plain HTTP requests (browser/REST) go through the existing reverse-proxy path,
- requests with `Content-Type: application/grpc` go to the gRPC server used for client tunnels
(a minimal routing sketch follows after this list).
- Example: `rpc OpenTunnel(stream TunnelFrame) returns (stream TunnelFrame)` (bi-directional streaming).
- Keep the gRPC stream alive over HTTP/2 + ALPN (h2) and multiplex request/response HTTP messages as `TunnelFrame`s.
- [x] Add proxy wiring to the server main: [`cmd/server/main.go`](cmd/server/main.go)
- Register sessions that completed the DTLS handshake in the proxy routing table.
- Connect the HTTPS server to the proxy handler.
- [x] Design/implement the client-side gRPC tunnel
- The client process keeps **one (or a small number of)** long-lived bi-directional gRPC streams open to the HopGate server.
- Receive incoming `TunnelFrame`s (request metadata + body chunks) from the server,
proxy them to the local HTTP service (e.g. `127.0.0.1:8080`), and send the response back as a sequence of `TunnelFrame`s.
- Reuse the HTTP mapping / stream ARQ experience from the existing `internal/proxy/client.go` when designing gRPC message-level chunking and flow control.
- [x] Add proxy loop wiring to the client main: [`cmd/client/main.go`](cmd/client/main.go)
- Run `proxy.ClientProxy.StartLoop` after a successful handshake.
- [x] Define the HTTP ↔ gRPC tunnel mapping convention
- Define the schema for representing one HTTP request/response pair on the gRPC stream:
- Request: `StreamID`, method, URL, headers, body chunks
- Response: `StreamID`, status, headers, body chunks, error
- Decide whether to serialize the current logical model in `internal/protocol/protocol.go` (Envelope/StreamOpen/StreamData/StreamClose/StreamAck)
into gRPC messages (oneof fields, etc.) or to define new gRPC-only messages.
- Rely on gRPC/HTTP2 stream flow control for back-pressure as much as possible,
introducing additional application-level windowing only minimally where needed.
- [ ] Define and test the gRPC-tunnel-based end-to-end flow
- Write a test plan covering, on a single gRPC stream:
- concurrent requests for multiple static resources (`/css`, `/js`, `/img`),
- scenarios mixing large responses (multi-MB files) with small ones (API JSON),
- client restart / reconnect-after-network-outage scenarios.
- Expected behavior:
- A slow request must not excessively delay other requests **within the same TCP connection / stream set**.
- No protocol-violation warnings (`unexpected frame ...`) appear in server/client logs.
> Note: The work history of the DTLS-based streams/ARQ/multiplexing (3.3A/3.3B) is kept only as a reference for
> implementation experience and ideas; new features and operational plans proceed on the basis of the gRPC tunnel.
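A minimal sketch of the shared-port routing mentioned above (handler and variable names are illustrative, not the actual `cmd/server/main.go` code; requires `"net/http"`, `"strings"`, and `"google.golang.org/grpc"`):

```go
func grpcOrHTTPMux(grpcServer *grpc.Server, httpHandler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r) // tunnel traffic from HopGate clients
			return
		}
		httpHandler.ServeHTTP(w, r) // regular reverse-proxy path for browsers/REST
	})
}
```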
---
@@ -261,10 +286,10 @@ This document tracks implementation progress against the HopGate architecture an
### 3.5 Observability / 관측성
- [ ] Expose Prometheus metrics and wire them into the server
- [x] Expose Prometheus metrics and wire them into the server
- Add a Prometheus `/metrics` endpoint to `cmd/server/main.go` (e.g. promhttp.Handler).
- Define counter/gauge metrics for DTLS session count, DTLS handshake success/failure counts, and HTTP/proxy request and error counts.
- Design labels such as domain, client ID, and request_id, kept consistent with the current structured log fields.
- Metrics are defined for DTLS handshake success/failure counts, HTTP request count, HTTP request latency, and proxy error count (a minimal sketch follows below).
- Metric labels are limited to method / status code / result / error type; domain, client ID, and request_id are exposed only as structured log fields.
- [ ] Loki/Grafana dashboards and query examples
- Collect key log query examples assuming a Loki/Promtail setup (keyed by domain, client ID, and request_id).
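A minimal sketch of the kind of metrics wiring referred to above (metric names and labels are illustrative; requires `"net/http"` and the Prometheus client packages `prometheus` and `promhttp`):

```go
var httpRequestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "hopgate_http_requests_total",
		Help: "Proxied HTTP requests, by method and status code.",
	},
	[]string{"method", "status"},
)

func wireMetrics(mux *http.ServeMux) {
	prometheus.MustRegister(httpRequestsTotal)
	mux.Handle("/metrics", promhttp.Handler())
}
```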
@@ -274,7 +299,7 @@ This document tracks implementation progress against the HopGate architecture an
### 3.6 Hardening / 안정성 & 구성
- [ ] Add configuration validation
- [x] Add configuration validation
- Clear error messages for missing or invalid required env vars.
- [ ] Error handling / retry policy
@@ -301,8 +326,10 @@ This document tracks implementation progress against the HopGate architecture an
### Milestone 2 — Full HTTP Tunneling (complete proxy behavior)
- [ ] Implement the server proxy core and HTTPS ↔ DTLS routing.
- [ ] Implement the client proxy loop and local service integration.
- [x] Implement the server proxy core and HTTPS ↔ DTLS routing.
- Currently runs through the `newHTTPHandler` / `dtlsSessionWrapper.ForwardHTTP` path in `cmd/server/main.go`.
- [x] Implement the client proxy loop and local service integration.
- Wired to the local service over the DTLS session via `cmd/client/main.go` + [`ClientProxy.StartLoop()`](internal/proxy/client.go:59).
- [ ] End-to-end HTTP request/response tunneling E2E test.
### Milestone 3 — ACME + production TLS/DTLS certificates
@@ -314,6 +341,9 @@ This document tracks implementation progress against the HopGate architecture an
### Milestone 4 — Observability & Hardening
- [ ] Prometheus/Loki/Grafana integration.
- Prometheus metric definitions and the `/metrics` endpoint are already implemented and running;
the Loki/Promtail/Grafana dashboards and operational integration work still remain.
- [ ] Refine error / retry / timeout policies.
- [ ] Final security/configuration review and documentation.

197
protocol.md Normal file
View File

@@ -0,0 +1,197 @@
# HopGate gRPC Tunnel Protocol
이 문서는 HopGate 서버–클라이언트 사이의 gRPC 기반 HTTP 터널링 규약을 정리합니다. (ko)
This document describes the gRPC-based HTTP tunneling protocol between HopGate server and clients. (en)
## 1. Transport Overview / 전송 개요
- Transport: TCP + TLS(HTTPS) + HTTP/2 + gRPC
- Single long-lived bi-directional gRPC stream per client: `OpenTunnel`
- Application payload type: `Envelope` (from `internal/protocol/hopgate_stream.proto`)
- HTTP requests/responses are multiplexed as logical streams identified by `StreamID`.
gRPC service (conceptual):
```proto
service HopGateTunnel {
rpc OpenTunnel (stream Envelope) returns (stream Envelope);
}
```
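A minimal client-side sketch of dialing the server and opening this stream, assuming the generated `pb` package from `internal/protocol/pb`, a recent `grpc-go` (for `grpc.NewClient`), and an illustrative module path and server address:

```go
package main

import (
	"context"
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	// Assumed import path; adjust to the actual module path of this repository.
	pb "github.com/dalbodeule/hop-gate/internal/protocol/pb"
)

func main() {
	// Production mode: system root CAs + SNI (see section 9 for the debug variant).
	creds := credentials.NewTLS(&tls.Config{ServerName: "tunnel.example.com"})

	conn, err := grpc.NewClient("tunnel.example.com:443", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// One long-lived bi-directional stream carrying Envelope messages.
	stream, err := pb.NewHopGateTunnelClient(conn).OpenTunnel(context.Background())
	if err != nil {
		log.Fatalf("open tunnel: %v", err)
	}
	_ = stream // Envelopes are exchanged via stream.Send / stream.Recv.
}
```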
## 2. Message Types / 메시지 타입
Defined in `internal/protocol/hopgate_stream.proto`:
- `HeaderValues`
- Wraps repeated header values: `map<string, HeaderValues>`
- `Request` / `Response`
- Simple single-message HTTP representation (not used in the streaming tunnel path initially).
- `StreamOpen`
- Opens a new logical stream for HTTP request/response (or other protocols in the future).
- `StreamData`
- Carries body chunks for a stream (`id`, `seq`, `data`).
- `StreamClose`
- Marks the end of a stream (`id`, `error`).
- `StreamAck`
- Legacy ARQ/flow-control hint from the UDP/DTLS design; in the gRPC tunnel it is reserved/optional.
- `Envelope`
- Top-level container with `oneof payload` of the above types.
In the gRPC tunnel, `Envelope` is the only gRPC message type used on the `OpenTunnel` stream.
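As a sketch of how these types compose, the following fragment (assuming the generated `pb` package; header values are examples only) builds an `Envelope` whose `oneof payload` carries a `StreamOpen`:

```go
// newHTTPStreamOpenEnvelope is illustrative; it wraps a StreamOpen in the
// Envelope oneof exactly as it would be sent on the OpenTunnel stream.
func newHTTPStreamOpenEnvelope() *pb.Envelope {
	return &pb.Envelope{
		Payload: &pb.Envelope_StreamOpen{
			StreamOpen: &pb.StreamOpen{
				Id:          "http-0",
				ServiceName: "web",
				Header: map[string]*pb.HeaderValues{
					"X-HopGate-Method": {Values: []string{"GET"}},
					"X-HopGate-URL":    {Values: []string{"/api/v1/foo?bar=1"}},
					"X-HopGate-Host":   {Values: []string{"example.com"}},
				},
			},
		},
	}
}
```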
## 3. Logical Streams and StreamID / 논리 스트림과 StreamID
- A single `OpenTunnel` gRPC stream multiplexes many **logical streams**.
- Each logical stream corresponds to one HTTP request/response pair.
- Logical streams are identified by `StreamOpen.id` (text StreamID).
- The server generates unique IDs per gRPC connection:
- HTTP streams: `"http-{n}"` where `n` is a monotonically increasing counter.
- Control stream: `"control-0"` (special handshake/metadata stream).
Within a gRPC connection:
- Multiple `StreamID`s may be active concurrently.
- Frames with different StreamIDs may be arbitrarily interleaved.
- Order within a stream is tracked by `StreamData.seq` (starting at 0).
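A possible per-connection allocator for this naming scheme (type and method names are illustrative; requires `"fmt"` and `"sync/atomic"`, Go 1.19+):

```go
// streamIDAllocator hands out "http-{n}" IDs for one gRPC connection.
type streamIDAllocator struct {
	next atomic.Uint64
}

func (a *streamIDAllocator) nextHTTPStreamID() string {
	// Add returns the incremented value, so subtract 1 to start at "http-0".
	return fmt.Sprintf("http-%d", a.next.Add(1)-1)
}

// controlStreamID is the reserved handshake/metadata stream (see section 6).
const controlStreamID = "control-0"
```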
## 4. HTTP Request Mapping (Server → Client) / HTTP 요청 매핑
When the public HTTPS reverse-proxy (`cmd/server/main.go`) receives an HTTP request for a domain that is bound
to a client tunnel, it serializes the request into a logical stream as follows.
### 4.1 StreamOpen (request metadata and headers)
- `StreamOpen.id`
- New unique StreamID: `"http-{n}"`.
- `StreamOpen.service_name`
- Logical service selection on the client (e.g., `"web"`).
- `StreamOpen.target_addr`
- Optional explicit local target address on the client (e.g., `"127.0.0.1:8080"`).
- `StreamOpen.header`
- Contains HTTP request headers and pseudo-headers:
- Pseudo-headers:
- `X-HopGate-Method`: HTTP method (e.g., `"GET"`, `"POST"`).
- `X-HopGate-URL`: original URL path + query (e.g., `"/api/v1/foo?bar=1"`).
- `X-HopGate-Host`: Host header value.
- Other keys:
- All remaining HTTP headers from the incoming request, copied as-is into the map.
### 4.2 StreamData* (request body chunks)
- If the request has a body, the server chunks it into fixed-size pieces.
- Chunk size: `protocol.StreamChunkSize` (currently 4 KiB).
- For each chunk:
- `StreamData.id = StreamOpen.id`
- `StreamData.seq` increments from 0, 1, 2, …
- `StreamData.data` contains the raw bytes.
### 4.3 StreamClose (end of request body)
- After sending all body chunks, the server sends a `StreamClose`:
- `StreamClose.id = StreamOpen.id`
- `StreamClose.error` is empty on success.
- If there was an application-level error while reading the body, `error` contains a short description.
The client reconstructs the HTTP request by:
- Reassembling the URL and headers from the `StreamOpen` pseudo-headers and header map.
- Concatenating `StreamData.data` in `seq` order into the request body.
- Treating `StreamClose` as the end-of-stream marker.
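A server-side sketch of this request serialization, where `send` stands in for delivering an `Envelope` on the `OpenTunnel` stream and the `"web"` service name is an example (requires `"io"`, `"net/http"`, and the generated `pb` package):

```go
func forwardRequest(send func(*pb.Envelope) error, id string, r *http.Request) error {
	// Pseudo-headers plus the original request headers.
	header := map[string]*pb.HeaderValues{
		"X-HopGate-Method": {Values: []string{r.Method}},
		"X-HopGate-URL":    {Values: []string{r.URL.RequestURI()}},
		"X-HopGate-Host":   {Values: []string{r.Host}},
	}
	for k, vs := range r.Header {
		header[k] = &pb.HeaderValues{Values: vs}
	}
	if err := send(&pb.Envelope{Payload: &pb.Envelope_StreamOpen{
		StreamOpen: &pb.StreamOpen{Id: id, ServiceName: "web", Header: header},
	}}); err != nil {
		return err
	}

	// Chunk the body into StreamData frames of protocol.StreamChunkSize (4 KiB).
	buf := make([]byte, 4*1024)
	var seq uint64
	for {
		n, err := r.Body.Read(buf)
		if n > 0 {
			chunk := append([]byte(nil), buf[:n]...)
			if sendErr := send(&pb.Envelope{Payload: &pb.Envelope_StreamData{
				StreamData: &pb.StreamData{Id: id, Seq: seq, Data: chunk},
			}}); sendErr != nil {
				return sendErr
			}
			seq++
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			// Report a body-read failure on the logical stream.
			return send(&pb.Envelope{Payload: &pb.Envelope_StreamClose{
				StreamClose: &pb.StreamClose{Id: id, Error: err.Error()},
			}})
		}
	}

	// An empty error marks normal end of the request body.
	return send(&pb.Envelope{Payload: &pb.Envelope_StreamClose{
		StreamClose: &pb.StreamClose{Id: id},
	}})
}
```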
## 5. HTTP Response Mapping (Client → Server) / HTTP 응답 매핑
The client receives `StreamOpen` + `StreamData*` + `StreamClose`, performs a local HTTP request to its
configured target (e.g., `http://127.0.0.1:8080`), then returns an HTTP response using the same StreamID.
### 5.1 StreamOpen (response headers and status)
- `StreamOpen.id`
- Same as the request StreamID.
- `StreamOpen.header`
- Contains response headers and a pseudo-header for status:
- Pseudo-header:
- `X-HopGate-Status`: HTTP status code as a string (e.g., `"200"`, `"502"`).
- Other keys:
- All HTTP response headers from the local backend, copied as-is.
### 5.2 StreamData* (response body chunks)
- The client reads the local HTTP response body and chunks it into 4 KiB pieces (same `StreamChunkSize`).
- For each chunk:
- `StreamData.id = StreamOpen.id`
- `StreamData.seq` increments from 0.
- `StreamData.data` contains the raw bytes.
### 5.3 StreamClose (end of response body)
- When the local backend response is fully read, the client sends a `StreamClose`:
- `StreamClose.id` is the same StreamID.
- `StreamClose.error`:
- Empty string on success.
- Short error description if the local HTTP request/response failed (e.g., connect timeout).
The server reconstructs the HTTP response by:
- Parsing `X-HopGate-Status` into an integer HTTP status code.
- Copying other headers into the outgoing response writer (with some security headers overridden by the server).
- Concatenating `StreamData.data` in `seq` order into the HTTP response body.
- Considering `StreamClose.error` for logging/metrics and possibly mapping to error pages if needed.
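A sketch of the status/header reconstruction step on the server side (function name is illustrative; requires `"net/http"`, `"strconv"`, and `"strings"`):

```go
// writeResponseHead maps the response StreamOpen back onto the public-side
// http.ResponseWriter; pseudo-headers are consumed, everything else is copied.
func writeResponseHead(w http.ResponseWriter, open *pb.StreamOpen) {
	status := http.StatusBadGateway // fallback if the pseudo-header is missing
	if hv := open.Header["X-HopGate-Status"]; hv != nil && len(hv.Values) > 0 {
		if s, err := strconv.Atoi(hv.Values[0]); err == nil {
			status = s
		}
	}
	for k, hv := range open.Header {
		if strings.HasPrefix(k, "X-HopGate-") {
			continue // pseudo-headers are not forwarded to the browser
		}
		for _, v := range hv.Values {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(status)
	// StreamData chunks are then copied to w in seq order until StreamClose.
}
```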
## 6. Control / Handshake Stream / 컨트롤 스트림
Before any HTTP request streams are opened, the client sends a single **control stream** to authenticate
and describe itself.
- `StreamOpen` (control):
- `id = "control-0"`
- `service_name = "control"`
- `header` contains:
- `X-HopGate-Domain`: domain this client is responsible for.
- `X-HopGate-API-Key`: client API key for the domain.
- `X-HopGate-Local-Target`: default local target such as `127.0.0.1:8080`.
- No `StreamData` is required for the control stream in the initial design.
- The server can optionally reply with its own control `StreamOpen/Close` to signal acceptance/rejection.
On the server side:
- `grpcTunnelServer.OpenTunnel` should:
1. Wait for the first `Envelope` with `StreamOpen.id == "control-0"`.
2. Extract domain, api key, and local target from the headers.
3. Call the ent-based `DomainValidator` to validate `(domain, api_key)`.
4. If validation succeeds, register this gRPC stream as the active tunnel for that domain.
5. If validation fails, log and close the gRPC stream.
Once the control stream handshake completes successfully, the server may start multiplexing multiple
HTTP request streams (`http-0`, `http-1`, …) over the same `OpenTunnel` connection.
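A client-side sketch of this handshake, sent immediately after `OpenTunnel` succeeds (the function name is illustrative; the header keys follow this section):

```go
func sendControlHandshake(stream pb.HopGateTunnel_OpenTunnelClient, domain, apiKey, localTarget string) error {
	open := &pb.StreamOpen{
		Id:          "control-0",
		ServiceName: "control",
		Header: map[string]*pb.HeaderValues{
			"X-HopGate-Domain":       {Values: []string{domain}},
			"X-HopGate-API-Key":      {Values: []string{apiKey}},
			"X-HopGate-Local-Target": {Values: []string{localTarget}},
		},
	}
	return stream.Send(&pb.Envelope{Payload: &pb.Envelope_StreamOpen{StreamOpen: open}})
}
```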
## 7. Multiplexing Semantics / 멀티플렉싱 의미
- A single TCP + TLS + HTTP/2 + gRPC connection carries:
- One long-lived `OpenTunnel` gRPC bi-di stream.
- Within it, many logical streams identified by `StreamID`.
- The server can open multiple HTTP streams concurrently for a given client:
- Example: `http-0` for `/css/app.css`, `http-1` for `/api/users`, `http-2` for `/img/logo.png`.
- Frames for these IDs can interleave arbitrarily on the wire.
- Per-stream ordering is preserved by combining `seq` ordering and the reliability of TCP/gRPC.
- Slow or large responses on one stream should not prevent other streams from making progress,
because gRPC/HTTP2 handles stream-level flow control and scheduling.
## 8. Flow Control and StreamAck / 플로우 컨트롤 및 StreamAck
- The gRPC tunnel runs over TCP/HTTP2, which already provides:
- Reliable, in-order delivery.
- Connection-level and stream-level flow control.
- Therefore, application-level selective retransmission is **not required** for the gRPC tunnel.
- `StreamAck` remains defined in the proto for backward compatibility with the DTLS design and
as a potential future hint channel (e.g., window size hints), but is not used in the initial gRPC tunnel.
## 9. Security Considerations / 보안 고려사항
- TLS:
- In production, the server uses ACME-issued certificates, and clients validate the server certificate
using system Root CAs and SNI (`ServerName`).
- In debug mode, clients may use `InsecureSkipVerify: true` to allow local/self-signed certs.
- Authentication:
- Application-level authentication relies on `(domain, api_key)` pairs sent via the control stream headers.
- The server must validate these pairs against the `Domain` table using `DomainValidator`.
- Authorization and isolation:
- Each gRPC tunnel is bound to a single domain (or a defined set of domains) after successful control handshake.
- HTTP requests for other domains must not be forwarded over this tunnel.
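A sketch of the client-side TLS choices described above (field values and the debug flag are illustrative; requires `"crypto/tls"`):

```go
func tunnelTLSConfig(serverName string, debug bool) *tls.Config {
	cfg := &tls.Config{
		ServerName: serverName,     // SNI; must match the ACME-issued certificate
		NextProtos: []string{"h2"}, // gRPC runs over HTTP/2 via ALPN
		MinVersion: tls.VersionTLS12,
	}
	if debug {
		cfg.InsecureSkipVerify = true // local/self-signed certificates only
	}
	return cfg
}
```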
Aligning the server and client implementations with this specification makes it possible to multiplex many HTTP requests
reliably over a single gRPC `OpenTunnel` stream while preserving domain/API-key authentication and TLS security.

57
tools/build_server_image.sh Executable file
View File

@@ -0,0 +1,57 @@
#!/bin/sh
# POSIX sh build script for the hop-gate server image.
# VERSION uses the 7-character SHA of the current git commit.
set -eu
# Compute the repo root relative to the script location
SCRIPT_DIR=$(cd "$(dirname "$0")" >/dev/null 2>&1 && pwd)
REPO_ROOT="${SCRIPT_DIR}/.."
cd "${REPO_ROOT}"
# 7-character SHA of the current commit; falls back to "dev" without git info
VERSION=$(git rev-parse --short=7 HEAD 2>/dev/null || echo dev)
# Default image name (can be overridden by the first argument)
# Examples:
# ./tools/build_server_image.sh
# ./tools/build_server_image.sh my/image/name
IMAGE_NAME=${1:-ghcr.io/dalbodeule/hop-gate}
echo "Building hop-gate server image"
echo " context : ${REPO_ROOT}"
echo " image : ${IMAGE_NAME}:${VERSION}"
echo " version : ${VERSION}"
# Check whether docker buildx is available
if command -v docker >/dev/null 2>&1 && docker buildx version >/dev/null 2>&1; then
BUILD_CMD="docker buildx build"
else
BUILD_CMD="docker build"
fi
# Optional environment variables:
# PLATFORM=linux/amd64,linux/arm64 # for buildx
# PUSH=1 # buildx --push
PLATFORM_ARGS=""
if [ "${PLATFORM-}" != "" ]; then
PLATFORM_ARGS="--platform ${PLATFORM}"
fi
PUSH_ARGS=""
if [ "${PUSH-}" != "" ]; then
PUSH_ARGS="--push"
fi
# Run the actual build
# shellcheck disable=SC2086
${BUILD_CMD} \
${PLATFORM_ARGS} \
-f Dockerfile.server \
--build-arg VERSION="${VERSION}" \
-t "${IMAGE_NAME}:${VERSION}" \
-t "${IMAGE_NAME}:latest" \
${PUSH_ARGS} \
.