
proton server --daemon does not work #818

Open
jhao0117 opened this issue Aug 14, 2024 · 0 comments
Labels: bug (Something isn't working)

jhao0117 (Collaborator):

  1. brew install timeplus-io/timeplus/proton on macOS succeeded
  2. ran proton server --help; the --daemon option is listed
  3. ran proton server in the foreground; it started successfully
jameshao@JamesdeMacBook-Pro ~ % proton server
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Env variable is not set: TELEMETRY_ENABLED
2024.08.14 10:51:23.684623 [ 14450641 ] {} <Information> : Starting proton 1.5.16 (revision: 20, git hash: f45e9091885e5c03741b0ebd031e943cec5fd665, build id: <unknown>), PID 51057
2024.08.14 10:51:23.696490 [ 14450641 ] {} <Information> Application: starting up
2024.08.14 10:51:23.696542 [ 14450641 ] {} <Information> Application: OS name: Darwin, version: 23.5.0, architecture: x86_64
2024.08.14 10:51:23.710753 [ 14450641 ] {} <Warning> Context: Available memory at server startup is too low (2GiB).
2024.08.14 10:51:23.712557 [ 14450641 ] {} <Warning> Context: Maximum number of threads is lower than 30000. There could be problems with handling a lot of simultaneous queries.
2024.08.14 10:51:23.716133 [ 14450641 ] {} <Warning> KeeperLogStore: No logs exists in ./proton-data/coordination/logs. It's Ok if it's the first run of proton-keeper.
2024.08.14 10:51:23.716174 [ 14450641 ] {} <Information> KeeperLogStore: force_sync disabled
2024.08.14 10:51:23.716319 [ 14450641 ] {} <Warning> MetaStateMachine: Fail to get rocksdb column families list at: ./proton-data/coordination : IO error: No such file or directory: While opening a file for sequentially reading: ./proton-data/coordination/CURRENT: No such file or directory
2024.08.14 10:51:23.724822 [ 14450641 ] {} <Information> MetaStateMachine: Totally have 0 snapshots
2024.08.14 10:51:23.724864 [ 14450641 ] {} <Information> MetaStateMachine: last committed log index 0
2024.08.14 10:51:23.724874 [ 14450641 ] {} <Warning> KeeperLogStore: Removing all changelogs
2024.08.14 10:51:23.725292 [ 14450641 ] {} <Information> RaftInstance: Raft ASIO listener initiated on :::9445, unsecured
2024.08.14 10:51:23.725319 [ 14450641 ] {} <Warning> RaftInstance: invalid election timeout lower bound detected, adjusted to 0
2024.08.14 10:51:23.725326 [ 14450641 ] {} <Warning> RaftInstance: invalid election timeout upper bound detected, adjusted to 0
2024.08.14 10:51:23.725336 [ 14450641 ] {} <Information> RaftInstance: parameters: election timeout range 0 - 0, heartbeat 0, leadership expiry 0, max batch 100, backoff 50, snapshot distance 100000, enable randomized snapshot creation NO, log sync stop gap 99999, reserved logs 100000, client timeout 10000, auto forwarding on, API call type blocking, custom commit quorum size 0, custom election quorum size 0, snapshot receiver included, leadership transfer wait time 0, grace period of lagging state machine 0, snapshot IO: blocking, parallel log appending: off
2024.08.14 10:51:23.725348 [ 14450641 ] {} <Information> RaftInstance: new election timeout range: 0 - 0
2024.08.14 10:51:23.725366 [ 14450641 ] {} <Information> RaftInstance:    === INIT RAFT SERVER ===
commit index 0
term 0
election timer allowed
log store start 1, end 0
config log idx 0, prev log idx 0
2024.08.14 10:51:23.725436 [ 14450641 ] {} <Information> RaftInstance: peer 1: DC ID 0, localhost:9445, voting member, 1
my id: 1, voting_member
num peers: 0
2024.08.14 10:51:23.725451 [ 14450641 ] {} <Information> RaftInstance: global manager does not exist. will use local thread for commit and append
2024.08.14 10:51:23.725493 [ 14450641 ] {} <Information> RaftInstance: wait for HB, for 50 + [0, 0] ms
2024.08.14 10:51:23.725545 [ 14450730 ] {} <Information> RaftInstance: bg append_entries thread initiated
2024.08.14 10:51:23.776212 [ 14450713 ] {} <Warning> RaftInstance: Election timeout, initiate leader election
2024.08.14 10:51:23.776240 [ 14450713 ] {} <Information> RaftInstance: [PRIORITY] decay, target 1 -> 1, mine 1
2024.08.14 10:51:23.776255 [ 14450713 ] {} <Information> RaftInstance: [ELECTION TIMEOUT] current role: follower, log last term 0, state term 0, target p 1, my p 1, hb dead, pre-vote NOT done
2024.08.14 10:51:23.776271 [ 14450713 ] {} <Information> RaftInstance: [VOTE INIT] my id 1, my role candidate, term 1, log idx 0, log term 0, priority (target 1 / mine 1)
2024.08.14 10:51:23.776283 [ 14450713 ] {} <Information> RaftInstance: number of pending commit elements: 0
2024.08.14 10:51:23.776289 [ 14450713 ] {} <Information> RaftInstance: state machine commit index 0, precommit index 0, last log index 0
2024.08.14 10:51:23.776306 [ 14450713 ] {} <Information> RaftInstance: [BECOME LEADER] appended new config at 1
2024.08.14 10:51:23.776441 [ 14450729 ] {} <Information> RaftInstance: config at index 1 is committed, prev config log idx 0
2024.08.14 10:51:23.776454 [ 14450729 ] {} <Information> RaftInstance: new config log idx 1, prev log idx 0, cur config log idx 0, prev log idx 0
2024.08.14 10:51:23.776471 [ 14450729 ] {} <Information> RaftInstance: new configuration: log idx 1, prev log idx 0
peer 1, DC ID 0, localhost:9445, voting member, 1
my id: 1, leader: 1, term: 1
2024.08.14 10:51:23.776683 [ 14450641 ] {} <Information> Application: Listening for http://[::1]:9444
2024.08.14 10:51:23.776706 [ 14450641 ] {} <Information> Application: Listening for http://0.0.0.0:9444
Env variable is not set: TELEMETRY_ENABLED
2024.08.14 10:51:23.780330 [ 14450641 ] {} <Information> Application: Setting max_server_memory_usage was set to 28.80 GiB (32.00 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
Env variable is not set: TELEMETRY_ENABLED
2024.08.14 10:51:23.785606 [ 14450641 ] {} <Information> Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32
2024.08.14 10:51:23.785780 [ 14450641 ] {} <Information> Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2024.08.14 10:51:23.786143 [ 14450641 ] {} <Information> Context: Initialized background executor for fetches with num_threads=8, num_tasks=8
2024.08.14 10:51:23.786295 [ 14450641 ] {} <Information> Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2024.08.14 10:51:23.786657 [ 14450641 ] {} <Information> Application: Loading metadata from ./proton-data/
2024.08.14 10:51:23.787976 [ 14450641 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 128 threads
2024.08.14 10:51:23.790778 [ 14450641 ] {} <Warning> ExternalGrokPatterns: External grok patterns file 'grok-patterns' does not exist, trying embedded resource
2024.08.14 10:51:23.790900 [ 14450641 ] {} <Warning> TelemetryCollector: Please note that telemetry is enabled. This is used to collect the version and runtime environment information to Timeplus, Inc. You can disable it by setting telemetry_enabled to false in config.yaml
2024.08.14 10:51:23.792094 [ 14450928 ] {} <Information> LocalFileSystemCheckpoint: Scanning delete-marked and expired checkpoints in checkpoint directory=./proton-data/checkpoint/, ttl=569800s
2024.08.14 10:51:23.792372 [ 14450641 ] {} <Information> NativeLog: NativeLog init with configs: meta_dir=./proton-data/nativelog/meta/ log_dirs=/Users/jameshao/proton-data/nativelog/log check_crcs=false max_schedule_threads=10 max_adhoc_schedule_threads=8 max_cached_entries=10000 max_cached_bytes=419430400 max_cached_entries_per_shard=100 max_cached_bytes_per_shard=4194304 max_wait_ms=500 max_bytes=65536 max_record_size=10485760 segment_size=1073741824 segment_ms=604800000 flush_interval_records=1000 flush_interval_ms=120000 retention_size=-1 retention_ms=604800000 file_delete_delay_ms=300000 max_index_size=10485760 index_interval_bytes=4096 index_interval_records=1000 preallocate=true recovery_threads_per_data_dir=1 flush_check_ms=4294967295 flush_recovery_sn_checkpoint_ms=60000 flush_start_sn_checkpoint_ms=60000 retention_check_ms=300000 num_threads=1 dedup_buffer_size=4194304 dedup_load_factor=0.9 io_buffer_size=1048576 max_io_bytes_per_second=1.7976931348623157e+308 backoff_ms=15000 enable_compactor=true hash_algo=MD5
2024.08.14 10:51:23.792411 [ 14450641 ] {} <Information> BackgroundSchedulePool/NLogSched: Create BackgroundSchedulePool with 10 threads
2024.08.14 10:51:23.792586 [ 14450641 ] {} <Information> NativeLog: Starting
2024.08.14 10:51:23.797532 [ 14450641 ] {} <Information> metastore: Init metastore in dir=./proton-data/nativelog/meta/
2024.08.14 10:51:23.806465 [ 14450641 ] {} <Information> LogManager: Loading logs from log_dirs=/Users/jameshao/proton-data/nativelog/log
2024.08.14 10:51:23.806528 [ 14450641 ] {} <Information> LogManager: Attempting recovery for all logs in /Users/jameshao/proton-data/nativelog/log since no clean shutdown file was found
2024.08.14 10:51:23.806637 [ 14450641 ] {} <Information> LogManager: Loaded 0 logs in 0ms
2024.08.14 10:51:23.806668 [ 14450641 ] {} <Information> NativeLog: Started
2024.08.14 10:51:23.806800 [ 14450641 ] {} <Information> KafkaWALPool: Kafka config section=cluster_settings.logstore.kafka is disabled, ignore it
2024.08.14 10:51:23.808778 [ 14450641 ] {} <Information> DatabaseAtomic (system): Metadata processed, database system has 0 tables and 0 dictionaries in total.
2024.08.14 10:51:23.808802 [ 14450641 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 0.000109 sec
2024.08.14 10:51:23.808814 [ 14450641 ] {} <Information> TablesLoader: Loading 0 streams with 0 dependency level
2024.08.14 10:51:23.826918 [ 14450641 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2024.08.14 10:51:23.828610 [ 14450641 ] {} <Information> DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2024.08.14 10:51:23.828661 [ 14450641 ] {} <Information> DatabaseAtomic (neutron): Metadata processed, database neutron has 0 tables and 0 dictionaries in total.
2024.08.14 10:51:23.828672 [ 14450641 ] {} <Information> TablesLoader: Parsed metadata of 0 tables in 2 databases in 0.00014 sec
2024.08.14 10:51:23.828680 [ 14450641 ] {} <Information> TablesLoader: Loading 0 streams with 0 dependency level
2024.08.14 10:51:23.828695 [ 14450641 ] {} <Information> DatabaseAtomic (default): Starting up tables.
2024.08.14 10:51:23.828705 [ 14450641 ] {} <Information> DatabaseAtomic (neutron): Starting up tables.
2024.08.14 10:51:23.828716 [ 14450641 ] {} <Information> DatabaseAtomic (system): Starting up tables.
2024.08.14 10:51:23.832930 [ 14450641 ] {} <Information> UserDefinedSQLObjectsLoaderFromDisk: Loading user defined objects from /Users/jameshao/proton-data/user_defined/
2024.08.14 10:51:23.833089 [ 14450641 ] {} <Information> Application: Query Profiler and TraceCollector are disabled because they require PHDR cache to be created (otherwise the function 'dl_iterate_phdr' is not lock free and not async-signal safe).
2024.08.14 10:51:23.835622 [ 14450641 ] {} <Information> DNSCacheUpdater: Update period 15 seconds
2024.08.14 10:51:23.835687 [ 14450641 ] {} <Information> Application: Available RAM: 32.00 GiB; physical cores: 16; logical cores: 16.
2024.08.14 10:51:23.835798 [ 14450641 ] {} <Information> Application: Listening for http://[::1]:3218
2024.08.14 10:51:23.835832 [ 14450641 ] {} <Information> Application: Listening for http://[::1]:8123
2024.08.14 10:51:23.835859 [ 14450641 ] {} <Information> Application: Listening for native protocol (tcp): [::1]:8463
2024.08.14 10:51:23.835887 [ 14450641 ] {} <Information> Application: Listening for snapshot server (tcp): [::1]:7587
2024.08.14 10:51:23.835922 [ 14450641 ] {} <Information> Application: Listening for PostgreSQL compatibility protocol: [::1]:5432
2024.08.14 10:51:23.835949 [ 14450641 ] {} <Information> Application: Listening for Prometheus: http://[::1]:9363
2024.08.14 10:51:23.835973 [ 14450641 ] {} <Information> Application: Listening for http://0.0.0.0:3218
2024.08.14 10:51:23.835998 [ 14450641 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2024.08.14 10:51:23.836049 [ 14450641 ] {} <Information> Application: Listening for native protocol (tcp): 0.0.0.0:8463
2024.08.14 10:51:23.836075 [ 14450641 ] {} <Information> Application: Listening for snapshot server (tcp): 0.0.0.0:7587
2024.08.14 10:51:23.836099 [ 14450641 ] {} <Information> Application: Listening for PostgreSQL compatibility protocol: 0.0.0.0:5432
2024.08.14 10:51:23.836123 [ 14450641 ] {} <Information> Application: Listening for Prometheus: http://0.0.0.0:9363
2024.08.14 10:51:23.836134 [ 14450641 ] {} <Information> Application: Ready for connections.
2024.08.14 10:51:25.608333 [ 14450800 ] {} <Information> TelemetryCollector: Telemetry sent successfully.
^C2024.08.14 10:51:25.687693 [ 14450665 ] {} <Information> Application: Received termination signal (Interrupt: 2)
2024.08.14 10:51:28.598609 [ 14450641 ] {} <Information> Application: Closed all listening sockets.
2024.08.14 10:51:28.598679 [ 14450641 ] {} <Information> CheckpointCoordinator: Trigger last checkpoint and flush begin
2024.08.14 10:51:28.598714 [ 14450641 ] {} <Information> CheckpointCoordinator: Trigger last checkpoint and flush end (elapsed 0 milliseconds)
2024.08.14 10:51:28.598748 [ 14450641 ] {} <Information> Application: Closed connections.
2024.08.14 10:51:31.852883 [ 14450641 ] {} <Information> Application: Shutting down storages.
2024.08.14 10:51:31.852987 [ 14450641 ] {} <Information> TelemetryCollector: Stopped
2024.08.14 10:51:31.853042 [ 14450641 ] {} <Information> Context: Shutdown disk default
2024.08.14 10:51:32.299358 [ 14450641 ] {} <Information> Application: Closed all listening sockets.
2024.08.14 10:51:32.299419 [ 14450641 ] {} <Information> Application: Closed connections to servers for tables.
2024.08.14 10:51:32.301390 [ 14450641 ] {} <Information> RaftInstance: shutting down raft core
2024.08.14 10:51:32.301430 [ 14450641 ] {} <Information> RaftInstance: sent stop signal to the commit thread.
2024.08.14 10:51:32.301443 [ 14450641 ] {} <Information> RaftInstance: cancelled all schedulers.
2024.08.14 10:51:32.301478 [ 14450641 ] {} <Information> RaftInstance: commit thread stopped.
2024.08.14 10:51:32.301518 [ 14450641 ] {} <Information> RaftInstance: all pending commit elements dropped.
2024.08.14 10:51:32.301530 [ 14450641 ] {} <Information> RaftInstance: reset all pointers.
2024.08.14 10:51:32.301558 [ 14450641 ] {} <Information> RaftInstance: joined terminated commit thread.
2024.08.14 10:51:32.301589 [ 14450730 ] {} <Information> RaftInstance: bg append_entries thread terminated
2024.08.14 10:51:32.301626 [ 14450641 ] {} <Information> RaftInstance: sent stop signal to background append thread.
2024.08.14 10:51:32.301668 [ 14450641 ] {} <Information> RaftInstance: clean up auto-forwarding queue: 0 elems
2024.08.14 10:51:32.301684 [ 14450641 ] {} <Information> RaftInstance: clean up auto-forwarding clients
2024.08.14 10:51:32.301701 [ 14450641 ] {} <Information> RaftInstance: raft_server shutdown completed.
2024.08.14 10:51:32.301814 [ 14450719 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 15
2024.08.14 10:51:32.301847 [ 14450723 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 4
2024.08.14 10:51:32.301867 [ 14450715 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.301883 [ 14450727 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.301917 [ 14450724 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 12
2024.08.14 10:51:32.301931 [ 14450728 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.301854 [ 14450713 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 4
2024.08.14 10:51:32.301823 [ 14450722 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 13
2024.08.14 10:51:32.301892 [ 14450725 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.301849 [ 14450714 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 4
2024.08.14 10:51:32.301821 [ 14450721 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 14
2024.08.14 10:51:32.301926 [ 14450720 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.301875 [ 14450717 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.302003 [ 14450726 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.302010 [ 14450718 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.302043 [ 14450716 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2024.08.14 10:51:32.811822 [ 14450641 ] {} <Information> Application: shutting down
2024.08.14 10:51:32.812315 [ 14450665 ] {} <Information> BaseDaemon: Stop SignalListener thread
2024.08.14 10:51:32.814215 [ 14450641 ] {} <Information> KafkaWALPool: Stopping
2024.08.14 10:51:32.814238 [ 14450641 ] {} <Information> KafkaWALPool: Stopped
2024.08.14 10:51:32.814246 [ 14450641 ] {} <Information> KafkaWALPool: dtored
2024.08.14 10:51:32.814254 [ 14450641 ] {} <Information> NativeLog: Stopping
2024.08.14 10:51:32.814963 [ 14450641 ] {} <Information> LogManager: Shutting down
2024.08.14 10:51:32.814984 [ 14450641 ] {} <Information> LogManager: Flushing and closing logs in /Users/jameshao/proton-data/nativelog/log
2024.08.14 10:51:32.815745 [ 14450641 ] {} <Information> LogManager: Shutdown completed
2024.08.14 10:51:32.815769 [ 14450641 ] {} <Information> NativeLog: Stopped
2024.08.14 10:51:32.815777 [ 14450641 ] {} <Information> NativeLog: dtored
  4. shut down the proton server, then ran proton server --daemon and checked whether the daemon had started; it had not
jameshao@JamesdeMacBook-Pro ~ % proton server --daemon
jameshao@JamesdeMacBook-Pro ~ % ps -ef |grep proton
  501 51130 34669   0 10:51AM ttys004    0:00.00 grep proton
jameshao@JamesdeMacBook-Pro ~ %
  5. checked on Linux as well; proton server --daemon fails there too
ubuntu@proton2:~/ossproton$ curl https://install.timeplus.com/oss | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2587  100  2587    0     0  24177      0 --:--:-- --:--:-- --:--:-- 24177
Downloading proton...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  617M  100  617M    0     0  37.4M      0  0:00:16  0:00:16 --:--:-- 35.6M
Download and permission setting completed: proton

To interact with Proton:
1. Start the Proton server(data store in current folder ./proton-data/ ):
   ./proton server

2. In a separate terminal, connect to the server:
   ./proton client
   (Note: If you encounter a 'connection refused' error, use: ./proton client --host 127.0.0.1)

3. To terminate the server, press ctrl+c in the server terminal.

For detailed usage and more information, check out the Timeplus documentation:
https://docs.timeplus.com/

You can also install it(data store in /var/lib/proton/):
    sudo ./proton install
ubuntu@proton2:~/ossproton$ ls -lah
total 618M
drwxrwxr-x 2 ubuntu ubuntu 4.0K Aug 14 03:20 .
drwxr-xr-x 9 ubuntu ubuntu 4.0K Aug 14 03:19 ..
-rwxrw-r-- 1 ubuntu ubuntu 618M Aug 14 03:20 proton
ubuntu@proton2:~/ossproton$ ./proton server --daemon
ubuntu@proton2:~/ossproton$ ps -ef|grep proton
ubuntu      2761    2496  0 03:21 pts/0    00:00:00 grep --color=auto proton
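
Not part of the original report, but a possible interim workaround sketch until --daemon is fixed: background the server with nohup and verify it is alive with kill -0, mirroring the reporter's ps -ef | grep proton check. Here sleep 5 is a stand-in for ./proton server so the pattern itself is runnable; the variable name SERVER_PID is illustrative.

```shell
# Hypothetical workaround (not from the issue): start the process detached
# from the terminal instead of relying on --daemon. Replace `sleep 5` with
# `./proton server` in practice.
nohup sleep 5 >/dev/null 2>&1 &
SERVER_PID=$!

# kill -0 sends no signal; it only checks that the PID is still running,
# which is what the reporter was verifying with `ps -ef | grep proton`.
if kill -0 "$SERVER_PID" 2>/dev/null; then
    echo "process is running (pid $SERVER_PID)"
fi

# Clean up the stand-in process.
kill "$SERVER_PID" 2>/dev/null
```

With nohup the process survives the shell closing, so this approximates daemon behavior, though unlike a true daemon it does not detach from the session or fork into the background on its own.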
jhao0117 added the bug (Something isn't working) label on Aug 14, 2024
jhao0117 changed the title from "proton server --daemon broken" to "proton server --daemon does not work" on Aug 14, 2024