Introduction
Welcome to the rsketch documentation! This is a Rust project template that includes modern tooling for gRPC API development and multi-language code generation.
What’s Included
- Rust Workspace: Organized crate structure with API, server, command-line, and common utilities
- gRPC/Protobuf: Protocol Buffer definitions with Buf integration
- Multi-language Code Generation: Generate gRPC stubs for Go, Java, C++, Python, TypeScript, and more
- Development Tools: Just recipes, automated formatting, linting, and testing
- CI/CD: GitHub Actions with comprehensive testing and documentation deployment
- Documentation: MDBook-based documentation with automatic deployment
Quick Start
- Clone and Setup:

git clone <your-repo>
cd rsketch

- Install Dependencies:

# Install Rust if you haven't already
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install additional tools
cargo install just mdbook
# Install buf for protobuf
# See: https://docs.buf.build/installation

- Build and Test:

just build
just test

- Generate Multi-language Code:

just proto-generate        # All languages
just proto-generate-go     # Go only
just proto-generate-java   # Java only
Project Structure
rsketch/
├── api/         # Protocol Buffer definitions and generated code
├── crates/      # Rust workspace crates
│   ├── cmd/     # Command-line interface
│   ├── common/  # Shared utilities
│   └── server/  # gRPC server implementation
├── docs/        # Documentation (this site)
└── examples/    # Usage examples
Next Steps
- API Guide - Learn about the gRPC API structure
- Buf Integration - Multi-language code generation setup
API Module
This directory contains all Protocol Buffer definitions and generated gRPC stubs for the rsketch project.
Architecture
This project uses a client-server architecture:
- Rust: Implements the gRPC server with both client and server code generated
- Other languages: Generate client-only code to consume the Rust gRPC API
This approach allows you to:
- Build the main service in Rust (performance, safety)
- Create client libraries for multiple languages (ecosystem integration)
- Maintain a single source of truth for the API definition
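The shape of that single source of truth is easiest to see in the proto file itself. Below is a sketch of what proto/hello/v1/hello.proto might contain, inferred from the client examples later in this guide; the package, service, and method names and the option values are assumptions, not the authoritative definition:

```protobuf
// proto/hello/v1/hello.proto (illustrative sketch, not the real file)
syntax = "proto3";

package hello.v1;

import "google/protobuf/empty.proto";

// Language-specific options drive where generated code lands.
option go_package = "github.com/crrow/rsketch/gen/go/hello/v1;hellov1";
option java_package = "com.rsketch.api.hello.v1";
option java_multiple_files = true;

// The client examples below call a single unary method that takes
// and returns google.protobuf.Empty.
service Hello {
  rpc Hello(google.protobuf.Empty) returns (google.protobuf.Empty);
}
```

Whatever the actual definition looks like, every generated client and the Rust server are derived from this one file.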
Directory Structure
api/
├── proto/                  # Protocol Buffer definitions
│   └── hello/
│       └── v1/
│           └── hello.proto
├── gen/                    # Generated code (git-ignored)
│   ├── go/                 # Go client libraries
│   ├── java/               # Java client libraries
│   ├── cpp/                # C++ client libraries
│   ├── c/                  # C client libraries (requires local setup)
│   ├── python/             # Python client libraries
│   ├── typescript/         # TypeScript client libraries
│   └── rust/               # Rust server + client code
├── buf.yaml                # Main Buf configuration
├── buf.gen.yaml            # Multi-language generation config
├── buf.gen.go.yaml         # Go-specific generation
├── buf.gen.java.yaml       # Java-specific generation
├── buf.gen.cpp.yaml        # C++-specific generation
├── buf.gen.c.yaml          # C-specific generation
├── buf.lock                # Dependency lock file
├── build.rs                # Rust build script (for tonic/prost)
├── Cargo.toml              # Rust crate configuration
└── src/                    # Rust API crate source
    └── lib.rs
Available Commands
All commands should be run from the project root:
Code Generation
# Generate code for all languages
just proto-generate
# Generate for specific languages
just proto-generate-go
just proto-generate-java
just proto-generate-cpp
just proto-generate-c # Requires local protobuf-c setup
# Generate directly with buf (from api/ directory)
cd api && buf generate
cd api && buf generate --template buf.gen.go.yaml
Validation
# Lint proto files
just proto-lint
# Format proto files
just proto-format
# Check for breaking changes (in CI/PR)
just proto-breaking
Dependency Management
# Update proto dependencies
just proto-deps-update
Language-Specific Setup
Go
Generated Go code is in gen/go/ with proper go_package options:
import hellov1 "github.com/crrow/rsketch/gen/go/hello/v1"
Java
Generated Java code is in gen/java/ with package com.rsketch.api.hello.v1:
import com.rsketch.api.hello.v1.HelloProto;
import com.rsketch.api.hello.v1.HelloGrpc;
C++
Generated C++ code is in gen/cpp/:
#include "hello/v1/hello.grpc.pb.h"
#include "hello/v1/hello.pb.h"
Rust
The Rust crate is built with tonic and prost, available as:
use rsketch_api::pb::hello::v1::*;
Adding New Services
- Create a new .proto file in the appropriate directory under proto/
- Add language-specific options (go_package, java_package, etc.)
- Update build.rs to include the new proto file for Rust compilation
- Run code generation: just proto-generate
- Update your application code to use the new generated stubs
Buf Schema Registry
To publish schemas to the Buf Schema Registry:
- Configure your module name in buf.yaml
- Authenticate: buf registry login
- Push: just proto-push
See the main project documentation for more details on buf integration.
Buf Integration Guide
This project uses Buf for managing Protocol Buffer files and generating gRPC stubs for multiple languages.
Overview
Buf provides:
- Linting: Ensures your proto files follow best practices
- Breaking change detection: Prevents breaking API changes
- Code generation: Generate gRPC client libraries for multiple languages
- Dependency management: Manages proto dependencies
Code Generation Strategy
This project is configured to generate:
- Rust: Both client and server code (for the backend service)
- Other languages (Go, Java, C++, Python, TypeScript): Client code only (for consuming the API)
- C: Requires local setup (see C section below)
Setup
Prerequisites
- Install Buf CLI:
# macOS
brew install bufbuild/buf/buf
# Linux
curl -sSL https://github.com/bufbuild/buf/releases/latest/download/buf-Linux-x86_64.tar.gz | tar -xvzf - -C /usr/local --strip-components 1
# Windows
choco install buf
- Verify installation:
buf --version
Configuration Files
All buf configuration files are located in the api/ directory:
- api/buf.yaml: Main configuration file that defines modules and linting rules
- api/buf.gen.yaml: General code generation configuration for all languages
- api/buf.gen.go.yaml: Go-specific code generation
- api/buf.gen.java.yaml: Java-specific code generation
- api/buf.gen.cpp.yaml: C++-specific code generation
- api/buf.lock: Dependency lock file (auto-generated)
Available Commands
Development Commands (via just)
# Lint proto files
just proto-lint
# Format proto files
just proto-format
# Check for breaking changes
just proto-breaking
# Generate code for all languages
just proto-generate
# Generate code for specific languages
just proto-generate-go
just proto-generate-java
just proto-generate-cpp
just proto-generate-c # Requires local protobuf-c setup
# Update dependencies
just proto-deps-update
# Push to Buf Schema Registry (BSR)
just proto-push
Direct Buf Commands
# Lint
buf lint
# Format (with write)
buf format -w
# Generate all
buf generate
# Generate for specific language
buf generate --template buf.gen.go.yaml
# Breaking change detection
buf breaking --against .git#branch=main
# Update dependencies
buf dep update
# Push to BSR
buf push
Generated Code Structure
Generated code will be placed in the api/gen/ directory:
api/
├── proto/             # Protocol buffer definitions
│   └── hello/
│       └── v1/
│           └── hello.proto
├── gen/               # Generated code for all languages
│   ├── go/            # Go gRPC stubs
│   ├── java/          # Java gRPC stubs
│   ├── cpp/           # C++ gRPC stubs
│   ├── python/        # Python gRPC stubs
│   ├── typescript/    # TypeScript/JavaScript stubs
│   └── rust/          # Rust gRPC stubs
├── buf.yaml           # Buf configuration
├── buf.gen.yaml       # Code generation config
├── buf.gen.go.yaml    # Go-specific generation
├── buf.gen.java.yaml  # Java-specific generation
├── buf.gen.cpp.yaml   # C++-specific generation
└── buf.lock           # Dependency lock file
Using Generated Code
Go Client Example
package main

import (
	"context"
	"log"

	hellov1 "github.com/crrow/rsketch/gen/go/hello/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/protobuf/types/known/emptypb"
)

func main() {
	// Connect to the Rust gRPC server
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Create client (generated code contains only client)
	client := hellov1.NewHelloServiceClient(conn)
	resp, err := client.Hello(context.Background(), &emptypb.Empty{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Response: %v", resp)
}
Java Client Example
import com.rsketch.api.hello.v1.HelloGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import com.google.protobuf.Empty;

public class HelloClient {
    public static void main(String[] args) {
        // Connect to the Rust gRPC server
        ManagedChannel channel = ManagedChannelBuilder
            .forAddress("localhost", 50051)
            .usePlaintext()
            .build();

        // Create client (generated code contains only client)
        HelloGrpc.HelloBlockingStub stub = HelloGrpc.newBlockingStub(channel);

        Empty response = stub.hello(Empty.getDefaultInstance());
        System.out.println("Response: " + response);

        channel.shutdown();
    }
}
C++ Client Example
#include <iostream>
#include <memory>

#include <grpcpp/grpcpp.h>
#include "hello/v1/hello.grpc.pb.h"
#include <google/protobuf/empty.pb.h>

using grpc::Channel;
using grpc::ClientContext;
using grpc::Status;
using rsketch::hello::v1::Hello;
using google::protobuf::Empty;

class HelloClient {
 public:
  HelloClient(std::shared_ptr<Channel> channel)
      : stub_(Hello::NewStub(channel)) {}

  void SayHello() {
    Empty request;
    Empty response;
    ClientContext context;

    // Call the Rust gRPC server
    Status status = stub_->Hello(&context, request, &response);
    if (status.ok()) {
      std::cout << "Hello successful" << std::endl;
    } else {
      std::cout << "Hello failed: " << status.error_message() << std::endl;
    }
  }

 private:
  std::unique_ptr<Hello::Stub> stub_;  // Client stub only
};

int main() {
  // Connect to the Rust gRPC server
  auto channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
  HelloClient client(channel);
  client.SayHello();
  return 0;
}
C Client Setup & Example
C gRPC support requires local installation of protobuf-c and gRPC-C libraries:
# Install protobuf-c (Ubuntu/Debian)
sudo apt-get install libprotobuf-c-dev protobuf-c-compiler
# Install protobuf-c (macOS with Homebrew)
brew install protobuf-c
# Install gRPC-C (build from source)
git clone https://github.com/grpc/grpc
cd grpc
git submodule update --init
mkdir -p cmake/build
cd cmake/build
cmake ../..
make grpc
Generate C code (requires local setup):
just proto-generate-c
Example C client (conceptual - actual implementation depends on gRPC-C setup):
#include <grpc-c/grpc-c.h>
#include "hello/v1/hello.pb-c.h"

int main() {
    // Initialize gRPC-C
    grpc_c_init(GRPC_C_TYPE_CLIENT, NULL);

    // Create client context
    grpc_c_context_t *context = grpc_c_context_init(NULL, 0);

    // Connect to Rust gRPC server
    grpc_c_client_t *client = grpc_c_client_init("localhost:50051", NULL, NULL);

    // Call Hello service (implementation depends on generated C code)
    // Note: Actual API depends on protobuf-c and gRPC-C generated code

    // Cleanup
    grpc_c_client_free(client);
    grpc_c_context_free(context);
    return 0;
}
Note: C gRPC implementation is more complex than other languages. Consider using C++ bindings with C wrappers for easier integration.
CI/CD Integration
The project includes automated buf validation in GitHub Actions:
- Linting: Validates proto file style and best practices
- Format checking: Ensures consistent formatting
- Breaking change detection: Prevents breaking changes in PRs
Buf Schema Registry (BSR)
To publish your schemas to BSR:
- Create an account at buf.build
- Create a new repository
- Update buf.yaml with your BSR module name
- Authenticate: buf registry login
- Push schemas: just proto-push
Best Practices
- Always run linting before committing proto changes
- Use breaking change detection for API evolution
- Version your APIs using semantic versioning in proto packages
- Generate code regularly to catch compilation issues early
- Keep generated code in .gitignore - regenerate as needed
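As a concrete, hypothetical illustration of the versioning practice: a breaking change ships as a new proto/hello/v2/hello.proto with its own package, while hello.v1 stays published unchanged, so existing clients keep working until they migrate.

```protobuf
// proto/hello/v2/hello.proto (hypothetical next major version).
// hello.v1 remains frozen; clients adopt hello.v2 on their own schedule.
syntax = "proto3";

package hello.v2;
```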
Troubleshooting
Common Issues
- Buf command not found: Ensure buf is installed and in PATH
- Linting errors: Run buf lint to see specific issues
- Breaking changes: Use buf breaking to identify breaking changes
- Generation failures: Check plugin versions in buf.gen.yaml files
Getting Help
OpenTelemetry Integration
This document describes how to use the OpenTelemetry integration in rsketch to monitor the performance of the HTTP and gRPC servers.
Configuration
The OpenTelemetry integration is configured through environment variables. The following variables are available:
- OTEL_EXPORTER_OTLP_ENDPOINT: The endpoint of the OpenTelemetry collector. Defaults to http://localhost:4317.
- OTEL_SERVICE_NAME: The name of the service. Defaults to rsketch.
Usage
To enable the OpenTelemetry integration, simply start the rsketch server:
cargo run --bin rsketch -- server
The server will automatically start exporting traces and metrics to the configured OpenTelemetry collector.
Kubernetes Integration
To integrate with the Kubernetes infrastructure described in the k8s/ directory, you will need to deploy an OpenTelemetry collector to your cluster. The collector can be configured to export data to a variety of backends, such as Jaeger, Prometheus, or a cloud-based observability platform.
Here is an example of a simple OpenTelemetry collector configuration that exports data to Jaeger:
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      jaeger:
        endpoint: jaeger-all-in-one:14250
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger]
This configuration creates a ConfigMap that contains the OpenTelemetry collector configuration. The collector is configured to receive data over OTLP (gRPC and HTTP) and export it to a Jaeger instance running in the cluster.
To deploy the collector, you can use the following Kubernetes manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector:0.84.0
          args:
            - "--config=/conf/otel-collector-config.yaml"
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
          ports:
            - name: otlp-grpc
              containerPort: 4317
            - name: otlp-http
              containerPort: 4318
      volumes:
        - name: otel-collector-config-vol
          configMap:
            name: otel-collector-conf
This manifest creates a Deployment that runs the OpenTelemetry collector. The collector is configured to use the ConfigMap created in the previous step.
Once the collector is deployed, you will need to configure the rsketch server to export data to the collector. You can do this by setting the OTEL_EXPORTER_OTLP_ENDPOINT environment variable to the address of the collector’s OTLP gRPC endpoint.
For example, if the collector is running in the default namespace, you can set the environment variable to http://otel-collector.default:4317.
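In a Kubernetes Deployment for the rsketch server, that variable can be set directly on the container. The fragment below is a hypothetical sketch (the rsketch-server name and rsketch:latest image are assumptions):

```yaml
# Fragment of a hypothetical rsketch server Deployment: the container
# exports OTLP data to the collector Service in the default namespace.
spec:
  template:
    spec:
      containers:
        - name: rsketch-server
          image: rsketch:latest   # assumed image name
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://otel-collector.default:4317"
            - name: OTEL_SERVICE_NAME
              value: "rsketch"
```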