#file-upload #http-file #upload #file #socket #cli #web-server

app http_file_uploader

Axum-based HTTP server focused on file uploads using multipart/form-data, saving them to a file, to stdout or to a subprocess

2 unstable releases

0.2.0 Oct 27, 2022
0.1.0 Oct 23, 2022

#966 in HTTP server

MIT/Apache

44KB
797 lines

http_file_uploader

A simple low-level web server for serving file uploads, with some features suited to shell scripting. A bridge between the Web world of multipart/form-data file uploads and the UNIX-style world of files and command lines. Similar in spirit to CGI, but lacking its flexibility (hopefully in exchange for security).

HTTP listening features

  • Listen for connections on a TCP address
  • Listen for connections on a UNIX path
  • Serve one connection over stdin/stdout, inetd-style
  • Accept connections on a pre-bound TCP or UNIX socket, e.g. from a systemd socket-activated service
  • Decode the multipart request body and pick a specific field from it (or, optionally, stream the whole request body directly). Or just start a program on every request (even without a body).
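As a sketch of the socket-activation mode, systemd can be emulated locally with the systemd-socket-activate tool, which binds the socket itself and passes it to the child process starting at file descriptor 3 (the port and flag combination here are illustrative assumptions, not taken from the project's documentation):

```shell
# Bind 127.0.0.1:8080 in the activator and hand the listening socket
# to the uploader as fd 3, the same way a systemd .socket unit would.
systemd-socket-activate -l 8080 \
    http_file_uploader --accept-tcp --fd 3 --stdout
```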

Destination features

  • Dump the received file to stdout
  • Save the file to the filesystem (optionally moving it elsewhere once the upload succeeds)
  • Start an external program to handle the upload; the data appears on its stdin.
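For the external-program destination, the handler is just an ordinary stdin filter, since the server pipes the uploaded file into it. A minimal sketch (the script name and log path are made up):

```shell
#!/bin/sh
# handle-upload.sh (hypothetical): read one uploaded file from stdin
# and append its SHA-256 checksum to a log file.
# Started once per upload via:
#   http_file_uploader -l 127.0.0.1:8080 -p ./handle-upload.sh
sha256sum >> /tmp/upload-checksums.log
```

Because the server sends SIGINT when an upload is cut short, a stateless filter like this can be killed mid-stream without cleanup.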

For the full list of options, see the "CLI usage" section below.

Most HTTP request parameters are ignored - it only cares about the incoming data. Use nginx/caddy to filter and modify requests and responses to your liking. You can choose which form field to handle; the other fields are ignored.
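Putting it behind a reverse proxy can be as small as this, assuming Caddy is installed (the hostname and output path are placeholders):

```shell
# Uploader listens only on localhost; Caddy terminates TLS and
# forwards requests to it.
http_file_uploader -l 127.0.0.1:8080 -o /srv/upload.bin &
caddy reverse-proxy --from uploads.example.com --to 127.0.0.1:8080
```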

Installation

Build it from source with cargo install --path ., install it from crates.io with cargo install http_file_uploader, or download a pre-built version from GitHub releases.

Examples

$ httpfileuploader -l 127.0.0.1:8080 --stdout |
Incoming connection from 127.0.0.1:32876      | $ curl http://127.0.0.1:8080/ --form aaa=www
www                                           | Upload successful

$ httpfileuploader -l 127.0.0.1:8080 -B -c --url -- echo
Incoming connection from 127.0.0.1:46750      | $ curl http://127.0.0.1:8080/asd?fgh
                                              | /asd?fgh

$ httpfileuploader  -l 127.0.0.1:8080 -o myupload.txt.tmp --rename-complete myupload.txt --once
Incoming connection from 127.0.0.1:48712      | $ curl http://127.0.0.1:8080 --form file=@Cargo.toml
                                              | Upload successful
$ cmp myupload.txt Cargo.toml

$ http_file_uploader -l 127.0.0.1:1234 -r -P -I --cmdline -- stdbuf -oL /bin/rev&
$ nc 127.0.0.1 1234
POST / HTTP/1.0
Content-Length: 1000

HTTP/1.0 200 OK
date: Thu, 27 Oct 2022 13:26:14 GMT

Hello, world
dlrow ,olleH
12345
54321
^C

CLI usage

http-file-uploader
  Special web server to allow shell scripts and other simple UNIX-ey programs to handle multipart/form-data HTTP file uploads

ARGS:
    <argv>...
      Command line array for --cmdline option

OPTIONS:
    -l, --listen <addr>
      Bind and listen specified TCP socket

    -u, --unix <path>
      Optionally remove and bind this UNIX socket for listening incoming connections

    --inetd
      Read from HTTP request from stdin and write HTTP response to stdout

    --accept-tcp
      Expect file descriptor 0 (or specified) to be pre-bound listening TCP socket e.g. from systemd's socket activation
      You may want to specify `--fd 3` for systemd

    --accept-unix
      Expect file descriptor 0 (or specified) to be pre-bound listening UNIX socket e.g. from systemd's socket activation
      You may want to specify `--fd 3` for systemd

    --fd <fd>
      File descriptor to use for --inetd or --accept-... modes instead of 0.

    --once
      Serve only one successful upload, then exit.
      Failed child process executions are not considered unsuccessful uploads for `--once` purposes; only invalid HTTP requests are.
      E.g. trying to write to /dev/full does exit with --once, but failure to open the --output file does not.

    -O, --stdout
      Dump contents of the file being uploaded to stdout.

    -o, --output <path>
      Save the file to specified location and overwrite it for each new upload

    -p, --program <path>
      Execute the specified program each time an upload starts, with no CLI parameters by default and the file content on stdin
      On UNIX, SIGINT is sent to the process if upload is terminated prematurely

    -c, --cmdline
      Execute command line (after --) each time the upload starts. URL is not propagated. Uploaded file content is in stdin.
      On UNIX, SIGINT is sent to the process if upload is terminated prematurely

    -n, --name <field_name>
      Restrict the multipart field to the specified name instead of taking the first encountered field.

    -r, --require-upload
      Require a file to be uploaded, otherwise failing the request.

    -L, --parallelism
      Allow multiple uploads simultaneously without any limit

    -j, --parallelism-limit <limit>
      Limit number of upload-serving processes running in parallel.
      You may want to also use -Q option

    -Q, --queue <len>
      Number of queued waiting requests before new ones start failing with 429. Default is no queue.
      Note that a single TCP connection can issue multiple requests in parallel, filling up the queue.

    -B, --buffer-child-stdout
      Buffer child process output to return it to HTTP client as text/plain

    -I, --pipe
      Don't bother calculating Content-Length, instead pipe child process's stdout to HTTP reply chunk by chunk

    --remove-incomplete
      Remove --output file if the upload was interrupted

    --rename-complete <path>
      Move --output's file to new path after the upload is fully completed

    -U, --url
      Append request URL as additional command line parameter

    --url-base64
      Append request URL as additional command line parameter, base64-encoded

    -q, --quiet
      Do not announce new connections

    -P, --allow-nonmultipart
      Allow plain, non-multipart/form-data requests (and stream body chunks instead of form field's chunks)

    --no-multipart
      Don't try to decode multipart/form-data content; always stream the request body as-is.

    -M, --method
      Append HTTP request method to the command line parameters (before --url if specified)

    -h, --help
      Prints help information.

Dependencies

~12–23MB
~380K SLoC