Helyim
A pure Rust implementation of seaweedfs.
Features
Additional features
- Choose no replication or different replication levels, rack and data center aware (see the sketch after this list).
- Automatic compaction to reclaim disk space after deletion or update.
- Automatic master server failover - no single point of failure (SPOF).
- Erasure coding for warm storage: rack-aware 10.4 erasure coding reduces storage cost.
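Replication levels are typically chosen at assignment time. As a sketch, seaweedfs accepts a replication query parameter on /dir/assign; assuming helyim mirrors this convention (an assumption, not confirmed by this README):
# Hypothetical: assign a fid with replication "001". In the seaweedfs
# encoding, the three digits mean: 0 copies in other data centers,
# 0 copies on other racks, 1 copy on another server in the same rack.
curl "http://127.0.0.1:9333/dir/assign?replication=001"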
Usage
By default, the master node runs on port 9333 and volume nodes run on port 8080. Let's start one master node and one volume node on port 8080. Ideally they should be started from different machines; we will use localhost as the example.
Helyim uses HTTP REST operations for reads, writes, and deletes. Responses are in JSON or JSONP format.
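As an illustration of the JSONP form, seaweedfs conventionally wraps the response when a callback query parameter is supplied on endpoints such as /dir/assign (shown below); assuming helyim mirrors this (an assumption, not confirmed here):
# Hypothetical JSONP request: the JSON response arrives wrapped as cb({...})
curl "http://127.0.0.1:9333/dir/assign?callback=cb"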
1. Start the master server
cargo run --bin helyim master
2. Start the volume server
cargo run --bin helyim volume --port 8080 --folders ./vdata:70 --folders ./v1data:10
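Each --folders entry appears to pair a data directory with a maximum volume count, so ./vdata:70 would cap ./vdata at 70 volumes; this reading follows the seaweedfs -max convention and is an assumption here.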
3. Write a file
To upload a file: first, send an HTTP POST, PUT, or GET request to /dir/assign to get an fid and a volume server URL:
curl http://127.0.0.1:9333/dir/assign
{"fid":"6,16b7578a5","url":"127.0.0.1:8080","public_url":"127.0.0.1:8080","count":1,"error":""}
Second, to store the file content, send an HTTP multipart POST request to url + '/' + fid from the response:
curl -F file=@./sun.jpg http://127.0.0.1:8080/6,16b7578a5
{"name":"sun.jpg","size":1675569,"error":""}
To update, send another POST request with the updated file content.
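For example, re-sending the same multipart POST overwrites the content stored under that fid:
curl -F file=@./sun.jpg http://127.0.0.1:8080/6,16b7578a5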
To delete, send an HTTP DELETE request to the same url + '/' + fid URL:
curl -X DELETE http://127.0.0.1:8080/6,16b7578a5
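Putting the steps together, here is a minimal end-to-end sketch. It assumes the master on 127.0.0.1:9333, the volume server on 127.0.0.1:8080, and jq for JSON parsing; reading a file back with a plain GET on the same URL follows the seaweedfs convention and is an assumption here.
# 1. Ask the master to assign a fid and a volume server URL.
ASSIGN=$(curl -s http://127.0.0.1:9333/dir/assign)
FID=$(echo "$ASSIGN" | jq -r '.fid')
URL=$(echo "$ASSIGN" | jq -r '.url')

# 2. Store the file content on the assigned volume server.
curl -F file=@./sun.jpg "http://$URL/$FID"

# 3. Read it back (assumed: a plain GET returns the content).
curl -o /tmp/sun.jpg "http://$URL/$FID"

# 4. Delete it.
curl -X DELETE "http://$URL/$FID"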
Master server failover
When initializing the Raft cluster, the same sequence of peer nodes must be specified when starting the leader and follower instances.
You can check the cluster status by visiting http://127.0.0.1:9333/cluster/status.
# start master1
cargo run --release --bin helyim -- master --ip 127.0.0.1 --port 9333 \
--peers 127.0.0.1:9333 \
--peers 127.0.0.1:9335 \
--peers 127.0.0.1:9337
# start master2
cargo run --release --bin helyim -- master --ip 127.0.0.1 --port 9335 \
--peers 127.0.0.1:9333 \
--peers 127.0.0.1:9335 \
--peers 127.0.0.1:9337
# start master3
cargo run --release --bin helyim -- master --ip 127.0.0.1 --port 9337 \
--peers 127.0.0.1:9333 \
--peers 127.0.0.1:9335 \
--peers 127.0.0.1:9337
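Once the three masters are up, the cluster status endpoint mentioned above can be queried from any of them:
curl http://127.0.0.1:9333/cluster/status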
Benchmarks
Results on a Lenovo IdeaPad Pro 16 (2023) laptop with an SSD; CPU: Intel Core i9, 14 cores, 5.4 GHz.
It appears to be slower than seaweedfs, especially for reads.
➜ ./weed benchmark -server=localhost:9333
This is SeaweedFS version 0.76 linux amd64
------------ Writing Benchmark ----------
Completed 15199 of 1048576 requests, 1.4% 15198.1/s 15.3MB/s
Completed 31887 of 1048576 requests, 3.0% 16687.9/s 16.8MB/s
Completed 48439 of 1048576 requests, 4.6% 16551.6/s 16.7MB/s
...
Completed 994044 of 1048576 requests, 94.8% 16645.2/s 16.8MB/s
Completed 1010800 of 1048576 requests, 96.4% 16755.8/s 16.9MB/s
Completed 1027412 of 1048576 requests, 98.0% 16612.2/s 16.7MB/s
Completed 1044319 of 1048576 requests, 99.6% 16907.0/s 17.0MB/s
Concurrency Level: 16
Time taken for tests: 63.249 seconds
Complete requests: 1048576
Failed requests: 0
Total transferred: 1106759553 bytes
Requests per second: 16578.50 [#/sec]
Transfer rate: 17088.29 [Kbytes/sec]
Connection Times (ms)
min avg max std
Total: 0.1 0.9 29.8 0.4
Percentage of the requests served within a certain time (ms)
50% 0.9 ms
66% 1.0 ms
75% 1.1 ms
90% 1.3 ms
95% 1.5 ms
98% 1.7 ms
99% 1.8 ms
100% 29.8 ms
------------ Randomly Reading Benchmark ----------
Completed 89963 of 1048576 requests, 8.6% 89957.6/s 90.5MB/s
Completed 187560 of 1048576 requests, 17.9% 97597.1/s 98.2MB/s
Completed 283486 of 1048576 requests, 27.0% 95925.8/s 96.6MB/s
Completed 382035 of 1048576 requests, 36.4% 98549.4/s 99.2MB/s
Completed 480649 of 1048576 requests, 45.8% 98613.9/s 99.3MB/s
Completed 583585 of 1048576 requests, 55.7% 102933.7/s 103.6MB/s
Completed 683954 of 1048576 requests, 65.2% 100370.9/s 101.0MB/s
Completed 782522 of 1048576 requests, 74.6% 98567.9/s 99.2MB/s
Completed 883504 of 1048576 requests, 84.3% 100982.7/s 101.7MB/s
Completed 987320 of 1048576 requests, 94.2% 103814.3/s 104.5MB/s
Concurrency Level: 16
Time taken for tests: 10.600 seconds
Complete requests: 1048576
Failed requests: 0
Total transferred: 1106777459 bytes
Requests per second: 98925.73 [#/sec]
Transfer rate: 101969.36 [Kbytes/sec]
Connection Times (ms)
min avg max std
Total: 0.0 0.1 2.3 0.1
Percentage of the requests served within a certain time (ms)
50% 0.1 ms
95% 0.2 ms
98% 0.4 ms
100% 2.3 ms
Acknowledgments