The data server provides the read API for the KV service.
- Install Bazelisk, which is the recommended way to install Bazel, or refer to Bazelisk's installation instructions. Hint: if you have Go installed, `go install` may be the simplest option, per these instructions. This works on Linux, Windows, and macOS.

  Note: these instructions refer to the `bazel` command. If you are using bazelisk instead, either invoke `bazelisk` in place of `bazel`, add an alias for `bazel`, or symlink `bazelisk` to `bazel` and add it to your `PATH`.

  If you install bazel directly rather than via bazelisk, refer to the `.bazeliskrc` file for the matching bazel version.
- Optional steps:
```sh
bazel build @com_github_grpc_grpc//test/cpp/util:grpc_cli
cp "$(bazel info bazel-bin)/external/com_github_grpc_grpc/test/cpp/util/grpc_cli" /bin/opt/grpc_cli
```
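The alias/symlink approach mentioned above can be set up with `ln -s`. The snippet below is a self-contained sketch that uses a stub script in a temporary directory so it runs anywhere; in practice you would symlink the real `bazelisk` binary into a directory already on your `PATH`:

```sh
# Create a stand-in for the real bazelisk binary (illustration only).
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "bazelisk invoked"\n' > "$bindir/bazelisk"
chmod +x "$bindir/bazelisk"

# Symlink `bazel` to `bazelisk` so the commands in this guide work as written.
ln -s "$bindir/bazelisk" "$bindir/bazel"

# With the directory on PATH, `bazel` now dispatches to bazelisk.
PATH="$bindir:$PATH" bazel
```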
For example:

```sh
bazel run //components/data_server/server:server --//:parameters=local --//:platform=aws -- --environment="dev"
```
Attention: the server can run locally while specifying `aws` as the platform, in which case it will contact AWS based on the local AWS credentials. However, this requires the AWS environment to be set up first by following the AWS deployment guide.
We are currently developing this server for local testing and for use on AWS Nitro instances (similar to the Aggregation Service). We anticipate supporting additional cloud providers in the future.
Use `grpc_cli` to interact with your local instance. You may have to pass `--channel_creds_type=insecure`.
Example:

```sh
grpc_cli call localhost:50051 GetValues "kv_internal: 'hi'" --channel_creds_type=insecure
```
The KV service instance should be set up by following the deployment guide (AWS). For faster iteration, an enclave image of the server is also produced under `dist/`. Once the system has been started, iterating on changes to the server itself only requires restarting the enclave image:
- Copy the new enclave EIF to an AWS EC2 instance that supports Nitro Enclaves. Note: the system has an SSH instance that a developer can access. From there, the user can access the actual server EC2 instances using the same SSH key, so the copy command below should be run twice to reach the destination EC2 instance.

```sh
scp -i ~/"key.pem" dist/server_enclave_image.eif ec2-user@${EC2_ADDR}.compute-1.amazonaws.com:/home/ec2-user/server_enclave_image.eif
```
- Start the enclave job (if one is already running, terminate it first; see below for instructions):

```sh
nitro-cli run-enclave --cpu-count 2 --memory 30720 --eif-path server_enclave_image.eif --debug-mode --enclave-cid 16
```
- To see logs of the TEE job:

```sh
ENCLAVE_ID=$(nitro-cli describe-enclaves | jq -r ".[0].EnclaveID"); [ "$ENCLAVE_ID" != "null" ] && nitro-cli console --enclave-id ${ENCLAVE_ID}
```
- To terminate the job:

```sh
ENCLAVE_ID=$(nitro-cli describe-enclaves | jq -r ".[0].EnclaveID"); [ "$ENCLAVE_ID" != "null" ] && nitro-cli terminate-enclave --enclave-id ${ENCLAVE_ID}
```
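When no enclave is running, `jq -r` prints the literal string `null`, which is why the commands above guard with `[ "$ENCLAVE_ID" != "null" ]` before calling `nitro-cli`. A self-contained sketch of that guard, using hard-coded sample JSON in place of `nitro-cli describe-enclaves` output (the output shape here is an assumption for illustration):

```sh
# Sample payloads standing in for `nitro-cli describe-enclaves` output:
# a JSON array of enclave objects, or an empty array when none are running.
running='[{"EnclaveID":"i-abc123-enc-1","State":"RUNNING"}]'
empty='[]'

# With a running enclave, jq extracts the ID.
echo "id: $(echo "$running" | jq -r ".[0].EnclaveID")"

# With no enclaves, jq -r yields the literal string "null",
# so the guard prevents invoking nitro-cli with a bogus ID.
ENCLAVE_ID=$(echo "$empty" | jq -r ".[0].EnclaveID")
[ "$ENCLAVE_ID" != "null" ] && echo "would call nitro-cli" || echo "no enclave running"
```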
It's possible to use polymorphism plus a build-time flag to build and link only the code specific to a platform. Example:
```bzl
cc_library(
    name = "blob_storage_client",
    srcs = select({
        "//:aws_platform": ["s3_blob_storage_client.cc"],
    }),
    hdrs = [
        "blob_storage_client.h",
    ],
    deps = select({
        "//:aws_platform": ["@aws_sdk_cpp//:s3"],
    }) + [
        "@com_google_absl//absl/status",
        "@com_google_absl//absl/status:statusor",
    ],
)
```
Available conditions are:

- `//:aws_platform`
- `//:local_platform`
Parameters can be configured separately to be read from specific platforms:

- `//:aws_parameters`
- `//:local_parameters`
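By analogy with the platform example above, a parameter-loading library could select its implementation using these conditions. The target and file names below are hypothetical, not actual files in the repo:

```bzl
cc_library(
    name = "parameter_client",
    srcs = select({
        # Illustrative file names only.
        "//:aws_parameters": ["aws_parameter_client.cc"],
        "//:local_parameters": ["local_parameter_client.cc"],
    }),
    hdrs = ["parameter_client.h"],
)
```

Building with `--//:parameters=local` or `--//:parameters=aws` would then compile and link only the matching source file.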